
gLite 3.1-Based Belle Grid Construction Report

Preface

This report describes the construction of a grid infrastructure for the Belle experiment at KEK, the experiment behind the 2008 Nobel Prize in Physics. The Belle grid infrastructure is used mainly for MC (Monte Carlo) production, and the follow-up experiment, Belle II, is actively pushing to adopt the grid in order to cope with exponentially growing data and processing volumes. We hope this report will help support domestic researchers working on the Belle and Belle II experiments.

We thank everyone who helped bring this report to completion.

November 15, 2009

Authors: 김법균 (KISTI), 유진승 (KISTI), 윤희준 (KISTI), 권석면 (KISTI), Christophe Bonnaud (KISTI), 박형우 (KISTI), 장행진 (KISTI)

Table of Contents

Glossary

Ⅰ. Overview
  1. Introduction to KEK

Ⅱ. Belle Grid Construction
  1. Belle Grid Construction Overview
  2. CE (with lcg-CE)
    (1) Installation of the VOMS Certificate
    (2) Joining the Belle VO
    (3) VO Parameter Setting
    (4) Creating Pool Accounts
    (5) Configuring GIP (Generic Information Provider)
    (6) Checking the Local Queue and Configuring Job Submission
  3. WN (Worker Nodes)
  4. Installation Tests
    (1) Local Scheduler
    (2) Information System
    (3) Grid Job Submission
  5. Belle Application Test
    (1) NFS Requirements for the Belle Application
    (2) NFS Server
    (3) WN and CE

Appendix A. site-info.def for Belle VO

List of Tables

Table 1. Current Belle Computing

List of Figures

Figure 1. The 2008 Nobel Prize in Physics produced by KEK
Figure 2. The Belle experiment detector
Figure 3. KEK Organization Chart
Figure 4. The KEKB accelerator and the Belle detector
Figure 5. Belle Collaboration
Figure 6. Grid sites supporting the Belle VO as of October 2009
Figure 7. Belle Grid utilization status 1

Glossary

□ AFS : Andrew File System
□ API : Application Programming Interface
□ BDII : Berkeley Database Information Index
□ CASTOR : CERN Advanced STORage manager
□ CE : Computing Element
□ CERN : European Laboratory for Particle Physics
□ ClassAd : Classified advertisement (Condor)
□ CLI : Command Line Interface
□ CNAF : INFN's National Center for Telematics and Informatics
□ CREAM : Computing Resource Execution And Management
□ dcap : dCache Access Protocol
□ DIT : Directory Information Tree (LDAP)
□ DLI : Data Location Interface
□ DN : Distinguished Name
□ EDG : European DataGrid
□ EDT : European DataTAG
□ EGEE : Enabling Grids for E-sciencE
□ ESM : Experiment Software Manager
□ FCR : Freedom of Choice for Resources
□ FNAL : Fermi National Accelerator Laboratory
□ FTS : File Transfer Service
□ GFAL : Grid File Access Library
□ GG : Grid Gate (aka gatekeeper)
□ GGF : Global Grid Forum (now called OGF)
□ GGUS : Global Grid User Support
□ GIIS : Grid Index Information Server
□ GLUE : Grid Laboratory for a Uniform Environment
□ GMA : Grid Monitoring Architecture
□ GOC : Grid Operations Centre
□ GRAM : Grid Resource Allocation Manager


□ GRIS : Grid Resource Information Service
□ GSI : Grid Security Infrastructure
□ gsidcap : GSI-enabled version of the dCache Access Protocol
□ gsirfio : GSI-enabled version of the Remote File Input/Output protocol
□ GUI : Graphical User Interface
□ GUID : Grid Unique ID
□ HSM : Hierarchical Storage Manager
□ ICE : Interface to CREAM Environment
□ ID : Identifier
□ INFN : Istituto Nazionale di Fisica Nucleare
□ IS : Information Service
□ JDL : Job Description Language
□ kdcap : Kerberos-enabled version of the dCache Access Protocol
□ LAN : Local Area Network
□ LB : Logging and Bookkeeping Service
□ LDAP : Lightweight Directory Access Protocol
□ LFC : LCG File Catalogue
□ LFN : Logical File Name
□ LHC : Large Hadron Collider
□ LCG : LHC Computing Grid
□ LRC : Local Replica Catalogue
□ LRMS : Local Resource Management System
□ LSF : Load Sharing Facility
□ MDS : Monitoring and Discovery Service
□ MPI : Message Passing Interface
□ MSS : Mass Storage System
□ NS : Network Server
□ OGF : Open Grid Forum (formerly called GGF)
□ OS : Operating System
□ PBS : Portable Batch System
□ PFN : Physical File Name
□ PID : Process IDentifier
□ POOL : Pool of Persistent Objects for LHC


□ PPS : Pre-Production Service
□ RAL : Rutherford Appleton Laboratory
□ RB : Resource Broker
□ RFIO : Remote File Input/Output
□ R-GMA : Relational Grid Monitoring Architecture
□ RLI : Replica Location Index
□ RLS : Replica Location Service
□ RM : Replica Manager
□ RMC : Replica Metadata Catalogue
□ RMS : Replica Management System
□ ROC : Regional Operations Centre
□ ROS : Replica Optimization Service
□ SAM : Service Availability Monitoring
□ SASL : Simple Authentication and Security Layer (LDAP)
□ SE : Storage Element
□ SFN : Site File Name
□ SMP : Symmetric Multi Processor
□ SN : Subject Name
□ SRM : Storage Resource Manager
□ SURL : Storage URL
□ TURL : Transport URL
□ UI : User Interface
□ URI : Uniform Resource Identifier
□ URL : Uniform Resource Locator
□ UUID : Universal Unique ID
□ VDT : Virtual Data Toolkit
□ VO : Virtual Organization
□ WLCG : Worldwide LHC Computing Grid
□ WMS : Workload Management System
□ WN : Worker Node
□ WPn : Work Package #n


Ⅰ. Overview

1. Introduction to KEK

□ KEK : 高エネルギー加速器研究機構 (Kō Enerugī Kasokuki Kenkyū Kikō)
○ High Energy Accelerator Research Organization
○ A high-energy physics research institute located in Tsukuba, Japan
○ Established in 1971, growing out of the INS (Institute of Nuclear Study), which was founded in 1955
○ Runs the Belle experiment using a detector at KEKB, an electron-positron collider
○ The theory published by Kobayashi and Maskawa in 1975 was recently confirmed by the Belle experiment, and Kobayashi and Maskawa received the 2008 Nobel Prize in Physics
○ Data taking with the KEKB detector continued until early 2009; the accelerator will be upgraded during a three-year shutdown, and from 2012 the Belle II experiment plans to produce more than 50 times as much data as the current accelerator
○ KEK is a High Energy Physics research organization that analyzes the data produced with its accelerators
○ The Belle experiment is one of several experiments at KEK and is run by the IPNS (Institute of Particle and Nuclear Studies) within KEK
□ Organization of computing at KEK
○ The CRC (Computing Research Center) supports the various experiments concurrently, and each experiment also has its own computing center
○ The Belle experiment has dedicated computing resources of its own, called the B-Factory or B-Computer
○ The Belle Grid is built and operated by the CRC
– Collaboration on the Belle Grid is arranged through the CRC
– Access to experiment data must be arranged with the Belle Computing staff
○ The head of Belle Computing is Nobuhiko Katayama


Figure 1. The 2008 Nobel Prize in Physics produced by KEK

Figure 2. The Belle experiment detector


Figure 3. KEK Organization Chart


Figure 4. The KEKB accelerator and the Belle detector

Figure 5. Belle Collaboration


                             1999~         2001~           2006~            2009~
                             (4 years)     (5 years)       (6 years)        (continued)
CPU (SI2K)                   ~100 (WS)     ~1200 (WS+PC)   ~42500 (PC)      ~115200 (PC)
Disk (TB)                    4             9               1000             1500
Tape (TB)                    160           620             3500             3500
Workgroup server (# hosts)   3 + 9         11              80 (+16          80 (+26
                                                           file servers)    file servers)
User Workstation (# hosts)   23 WS + 68 X  28 WS + 128 PC  128 PC           100 PC

Table 1. Current Belle Computing


Ⅱ. Belle Grid Construction

1. Belle Grid Construction Overview

□ The Belle Grid runs on the WLCG infrastructure, so the grid is built on the gLite middleware.
○ In other words, an existing LCG-CE (gLite 3.1) must be configured so that the Belle VO can use it.
□ The steps required to let the Belle VO use the local resources are summarized below.
○ CE (with lcg-CE)
– Install VOMS Certificate
– Join Belle VO
– VO Parameter Setting
– Create Pool accounts
– Configure Information Service (GIP)
– Local Queue & Configure Job Submission
○ WN
– Install VOMS Certificate
– VO Parameter Setting
– Create Pool accounts
○ Installation Test
– Local Scheduler
– Information System
– Grid Job Submission
○ Belle Application Installation

2. CE (with lcg-CE)

□ The overall procedure consists of installing the VOMS certificate, joining the Belle VO, setting the VO parameters, configuring and creating the pool accounts, and configuring job submission.


(1) Installation of the VOMS Certificate

□ Previously the VOMS certificate was downloaded from http://voms.kek.jp, but as of October 2009 this has been consolidated onto a server (voms.cc.kek.jp) operated by the CRC (Computing Research Center) at KEK.
□ Visit the sites below and install the certificate into the /etc/grid-security/vomsdir directory.
○ https://cic.gridops.org/index.php?section=vo&page=homepage
○ https://cic.gridops.org/downloadRP.php?section=database&rpname=certificate&vo=belle&vomsserver=voms.cc.kek.jp
□ Subscribing to the [email protected] mailing list provides periodic updates and support.
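□ A minimal sketch of this step (an assumption on our part: the second URL above returns the VOMS host certificate in PEM form; verify what the portal actually serves):

> mkdir -p /etc/grid-security/vomsdir
> wget -O /etc/grid-security/vomsdir/voms.cc.kek.jp.pem "https://cic.gridops.org/downloadRP.php?section=database&rpname=certificate&vo=belle&vomsserver=voms.cc.kek.jp"
> # sanity check: print the subject of the downloaded certificate
> openssl x509 -in /etc/grid-security/vomsdir/voms.cc.kek.jp.pem -noout -subject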

(2) Joining the Belle VO

□ To manage a farm that supports the Belle VO, the administrator must join the Belle VO.
□ You can register at the site below.
○ https://voms.cc.kek.jp:8443/voms/belle
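□ The VOMS web registration requires your grid user certificate to be loaded in the browser. A PEM certificate/key pair can be converted to PKCS#12 for import (a sketch, assuming the usual ~/.globus layout; the output file name is arbitrary):

> openssl pkcs12 -export -in ~/.globus/usercert.pem -inkey ~/.globus/userkey.pem -out ~/belle-grid-cert.p12 -name "grid certificate"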

(3) VO Parameter Setting

□ More detailed information is available at https://voms.kek.jp:8443/voms/belle/webui/config.
□ The basic administration interface is https://voms.cc.kek.jp:8443/voms/belle.

□ Edit the vomses file (/opt/glite/etc/vomses/belle-voms.cc.kek.jp)

> cat /opt/glite/etc/vomses/belle-voms.kek.jp
"belle" "voms.kek.jp" "15020" "/C=JP/O=KEK/OU=CRC/CN=host/voms.kek.jp" "belle"

○ From Oct. 2009,

> cat /opt/glite/etc/vomses/belle-voms.cc.kek.jp
"belle" "voms.cc.kek.jp" "15020" "/C=JP/O=KEK/OU=CRC/CN=host/voms.cc.kek.jp" "belle"


□ Edit the mkgridmap file (/opt/edg/etc/edg-mkgridmap.conf)

group vomss://voms.kek.jp:8443/voms/belle .belle

○ From Oct. 2009,

group vomss://voms.cc.kek.jp:8443/voms/belle .belle

□ Using an lsc file makes future maintenance easier.

> cat /etc/grid-security/vomsdir/belle/voms.kek.jp.lsc
/C=JP/O=KEK/OU=CRC/CN=host/voms.kek.jp
/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority

○ From Oct. 2009,

$ cat /etc/grid-security/vomsdir/belle/voms.cc.kek.jp.lsc
/C=JP/O=KEK/OU=CRC/CN=host/voms.cc.kek.jp
/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority

□ Change site-info.def for the YAIM configuration.

$ cat $GLITE_LOCATION/yaim/etc/site-info.def
VO_BELLE_VOMS_SERVERS="'vomss://voms.cc.kek.jp:8443/voms/belle/'"
VO_BELLE_VOMSES="'belle voms.cc.kek.jp 15020 /C=JP/O=KEK/OU=CRC/CN=host/voms.cc.kek.jp belle'"
VO_BELLE_VOMS_CA_DN="'/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority'"

(4) Creating Pool Accounts

□ As with other VOs, opinions on the sgm account differ. A single sgm account is convenient during installation, while pool accounts provide flexibility.
□ The Belle side recommends installing a single sgm account and about 50 user accounts.


□ groups.conf ($YAIM_HOME/etc/groups.conf)

> cat /opt/glite/yaim/etc/groups.conf
"/VO=belle/GROUP=/belle/ROLE=lcgadmin":::sgm:
"/VO=belle/GROUP=/belle/ROLE=production":::prd:
"/VO=belle/GROUP=/belle"::::

○ users.conf ($YAIM_HOME/etc/users.conf)

> cat /opt/glite/yaim/etc/users.conf
999901:belle001:99100:belle:belle::
999902:belle002:99100:belle:belle::
...
999950:belle050:99100:belle:belle::
999952:prdbelle:99101:belleprd:belle:prd:
999951:sgmbelle:99102:bellesgm,belle:belle:sgm:
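○ The 50 numbered entries above need not be typed by hand; a short loop can generate them (a sketch, assuming the UID/GID scheme shown above):

> for i in $(seq -w 1 50); do echo "9999${i}:belle0${i}:99100:belle:belle::"; done >> /opt/glite/yaim/etc/users.conf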

□ site-info.def ($YAIM_HOME/etc/site-info.def)

# cat /opt/glite/yaim/etc/site-info.def | grep -i BELLE
VOS="alice dteam ops belle"
BELLE_GROUP_ENABLE="belle"
VO_BELLE_SW_DIR=$VO_SW_DIR/belle
VO_BELLE_DEFAULT_SE=$DPM_HOST
VO_BELLE_STORAGE_DIR=$DPM_HOST/belle
VO_BELLE_VOMS_SERVERS="'vomss://voms.kek.jp:8443/voms/belle?/belle/'"
VO_BELLE_VOMSES="'belle voms.kek.jp 15020 /C=JP/O=KEK/OU=CRC/CN=host/voms.kek.jp belle'"
VO_BELLE_VOMS_CA_DN="'/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority'"

○ From Oct. 2009,

VO_BELLE_VOMS_SERVERS="'vomss://voms.cc.kek.jp:8443/voms/belle/'"
VO_BELLE_VOMSES="'belle voms.cc.kek.jp 15020 /C=JP/O=KEK/OU=CRC/CN=host/voms.cc.kek.jp belle'"
VO_BELLE_VOMS_CA_DN="'/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority'"


□ Create the pool accounts with YAIM.

○ Generate the mapping information.
– Run the config_mkgridmap function

> /opt/glite/yaim/bin/yaim -r -s /opt/glite/yaim/etc/site-info.def -n lcg-CE -n TORQUE_server -n TORQUE_utils -f config_mkgridmap
...
Installed YAIM versions:
glite-yaim-lcg-ce 4.0.4-4

####################################################################
INFO: The default location of the grid-env.(c)sh files will be: /opt/glite/etc/profile.d
INFO: Sourcing the utilities in /opt/glite/yaim/functions/utils
INFO: Detecting environment
INFO: Assuming the node types: lcg-CE TORQUE_server TORQUE_utils BDII_site
INFO: Using hostname: ce-alice.sdfarm.kr
INFO: Executing function: config_mkgridmap_check
INFO: Executing function: config_mkgridmap_setenv
INFO: Executing function: config_mkgridmap
INFO: Now creating the grid-mapfile - this may take a few minutes...
INFO: YAIM terminated succesfully.

○ Create the local accounts.
– Run the config_users function

> /opt/glite/yaim/bin/yaim -r -s /opt/glite/yaim/etc/site-info.def -n lcg-CE -n TORQUE_server -n TORQUE_utils -f config_users

○ If the steps above do not complete cleanly, edit the mapping files directly.

> cat /opt/edg/etc/lcmaps/groupmapfile
...
"/VO=belle/GROUP=/belle/ROLE=lcgadmin/Capability=NULL" bellesgm
"/VO=belle/GROUP=/belle/ROLE=lcgadmin" bellesgm
"/VO=belle/GROUP=/belle/ROLE=production/Capability=NULL" belleprd
"/VO=belle/GROUP=/belle/ROLE=production" belleprd
"/VO=belle/GROUP=/belle/Role=NULL/Capability=NULL" belle
"/VO=belle/GROUP=/belle" belle
...

> cat /opt/edg/etc/lcmaps/gridmapfile
...
"/VO=belle/GROUP=/belle/ROLE=lcgadmin/Capability=NULL" .sgmbelle
"/VO=belle/GROUP=/belle/ROLE=lcgadmin" .sgmbelle
"/VO=belle/GROUP=/belle/ROLE=production/Capability=NULL" prdbelle
"/VO=belle/GROUP=/belle/ROLE=production" prdbelle
"/VO=belle/GROUP=/belle/Role=NULL/Capability=NULL" .belle
"/VO=belle/GROUP=/belle" .belle
...
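○ To confirm that Belle member DNs actually end up in the grid-mapfile, it can be regenerated and inspected by hand (a sketch; edg-mkgridmap is the tool that config_mkgridmap schedules via cron):

> /opt/edg/sbin/edg-mkgridmap --output=/etc/grid-security/grid-mapfile --safe
> grep -i belle /etc/grid-security/grid-mapfile | head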

(5) Configuring GIP (Generic Information Provider)

□ The local resources must advertise through the Information Service that they support the Belle VO. The easiest way is usually to configure this with YAIM.
○ Run the config_gip function

> ./bin/yaim -r -s ./etc/site-info.def -n lcg-CE -n TORQUE_server -n TORQUE_utils -f config_gip
...
Installed YAIM versions:
glite-yaim-lcg-ce 4.0.4-4

####################################################################
INFO: The default location of the grid-env.(c)sh files will be: /opt/glite/etc/profile.d
INFO: Sourcing the utilities in /opt/glite/yaim/functions/utils
INFO: Detecting environment
INFO: Assuming the node types: lcg-CE TORQUE_server TORQUE_utils BDII_site
INFO: Using hostname: ce-alice.sdfarm.kr
INFO: Executing function: config_gip
Adding user edginfo to group infosys
Adding user edguser to group infosys
Adding user rgma to group infosys
INFO: YAIM terminated succesfully.
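□ A quick way to confirm that the Belle VO is now published is an LDAP query against the CE's resource BDII (a sketch; the hostname is this site's CE, and the attribute names follow the GLUE 1.x schema):

> ldapsearch -x -h ce-alice.sdfarm.kr -p 2170 -b mds-vo-name=resource,o=grid '(GlueCEAccessControlBaseRule=VO:belle)' GlueCEUniqueID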


(6) Checking the Local Queue and Configuring Job Submission

□ Verify that the dedicated queue for the Belle VO has been created correctly.
○ If it is missing or misconfigured, refer to site-info.def.

> qstat -Q
Queue   Max  Tot  Ena  Str  Que  Run  Hld  Wat  Trn  Ext  T
-----   ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  -
ops     0    0    yes  yes  0    0    0    0    0    0    E
alice   0    0    yes  yes  0    0    0    0    0    0    E
belle   0    0    yes  yes  0    0    0    0    0    0    E
dteam   0    0    yes  yes  0    0    0    0    0    0    E

> grep BELLE_GROUP_ENABLE /opt/glite/yaim/etc/site-info.def
BELLE_GROUP_ENABLE="belle"

□ The pool accounts created for Belle VO users must be granted the appropriate job submission permissions.

> qmgr -c "print server"
...
#
# Create and define queue belle
#
create queue belle
set queue belle queue_type = Execution
set queue belle resources_max.cput = 48:00:00
set queue belle resources_max.walltime = 72:00:00
set queue belle acl_group_enable = True
set queue belle acl_groups = belle
set queue belle enabled = True
set queue belle started = True
...

> qmgr -c "set queue belle acl_groups += bellesgm"
> qmgr -c "print server"
...
#
# Create and define queue belle
#
create queue belle
set queue belle queue_type = Execution
set queue belle resources_max.cput = 48:00:00
set queue belle resources_max.walltime = 72:00:00
set queue belle acl_group_enable = True
set queue belle acl_groups = belle
set queue belle acl_groups += bellesgm
set queue belle enabled = True
set queue belle started = True
...
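○ A quick negative test confirms that the ACL is active (a sketch; the ops pool account name is illustrative): a submission from an account outside the belle/bellesgm groups should be rejected by the server.

> su - ops001 -c "echo /bin/hostname | qsub -q belle"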

3. WN (Worker Nodes)

□ Perform the following tasks on each WN; the procedure is the same as for the CE.
○ Install the VOMS Certificate (same as the CE)
○ Set the VO parameters (same as the CE)
○ Create the pool accounts (unlike the CE, there is no need to run the mkgridmap function)

> ./bin/yaim -r -s ./etc/site-info.def -n glite-WN -n TORQUE_client -f config_users

□ All WNs must have the same VO parameter settings and the same pool accounts (UIDs).
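○ Keeping all WNs identical is easiest to script (a sketch, assuming passwordless root ssh to the WNs and the wn-list.conf file already used by YAIM):

> for wn in $(cat /opt/glite/yaim/etc/wn-list.conf); do
    scp /opt/glite/yaim/etc/users.conf /opt/glite/yaim/etc/groups.conf root@${wn}:/opt/glite/yaim/etc/
    ssh root@${wn} "/opt/glite/yaim/bin/yaim -r -s /opt/glite/yaim/etc/site-info.def -n glite-WN -n TORQUE_client -f config_users"
  done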

4. Installation Tests

(1) Local Scheduler

□ Test that each local account for the Belle VO can submit jobs.

root > cat ./testJob.sh
#!/bin/sh

pwd

echo I am $(whoami) on $(hostname)

root > su - bellesgm
bellesgm > qsub -q belle ./testJob.sh
159756.ce-alice.sdfarm.kr
bellesgm > cat ./testJob.sh.o159756
/home/sgmbelle02
I am bellesgm on twn098.sdfarm.kr

(2) Information System

□ Test that a local user can create a proxy for the Belle VO.

> voms-proxy-init --voms belle
Cannot find file or dir: /home/***/.glite/vomses
Enter GRID pass phrase:
Your identity: /C=KR/O=KISTI/O=GRID/O=KISTI/CN=***
Creating temporary proxy ...... Done
Contacting voms.kek.jp:15020 [/C=JP/O=KEK/OU=CRC/CN=host/voms.kek.jp] "belle" Done
Creating proxy ...... Done
Your proxy is valid until Fri Feb 20 01:09:41 2009

user > voms-proxy-info -all
subject   : /C=KR/O=KISTI/O=GRID/O=KISTI/CN=***/CN=proxy
issuer    : /C=KR/O=KISTI/O=GRID/O=KISTI/CN=***
identity  : /C=KR/O=KISTI/O=GRID/O=KISTI/CN=***
type      : proxy
strength  : 512 bits
path      : /tmp/x509up_u501
timeleft  : 11:59:41
=== VO belle extension information ===
VO        : belle
subject   : /C=KR/O=KISTI/O=GRID/O=KISTI/CN=***
issuer    : /C=JP/O=KEK/OU=CRC/CN=host/voms.kek.jp
attribute : /belle/Role=NULL/Capability=NULL
timeleft  : 11:59:40

□ Run lcg-infosites to check that the newly configured CE is visible within the Belle VO.


○ A valid proxy must exist before running this command.

> lcg-infosites --vo belle ce | grep sdfarm.kr
112    112    0    0    0    ce-alice.sdfarm.kr:2119/jobmanager-lcgpbs-belle

(3) Grid Job Submission

□ Authentication test
○ Run this after creating a proxy for the Belle VO.

> globusrun -a -r ce-alice.sdfarm.kr

GRAM Authentication test successful

□ Simple job submission test (globus-job-run & time)

> time globus-job-run ce-alice.sdfarm.kr `which id`
uid=21047(belle047) gid=2100(belle) groups=2100(belle)

real    0m0.486s
user    0m0.129s
sys     0m0.040s

□ Simple job submission test (globus-job-run with parameter)

> globus-job-run ce-alice.sdfarm.kr `which tail` /var/spool/maui/maui.cfg
LOGLEVEL 1

# Set the delay to 1 minute before Maui tries to run a job again,
# in case it failed to run the first time.
# The default value is 1 hour.

DEFERTIME 00:01:00

# Necessary for MPI grid jobs
ENABLEMULTIREQJOBS TRUE

5. Belle Application Test


□ The CE and the WNs must all share the Belle Application through the same NFS service.
□ The Belle Application must be installed through the sgm account.
□ The Belle Application uses PostgreSQL, and the DB server must be configured so that the CE and every WN can use this service.

(1) NFS Requirements for the Belle Application

○ NFS Server
– > 30 GB disk space
– mounted on all worker nodes
– accessible via the globus-job-run command
– set the directory to $VO_BELLE_SW_DIR
– database accessible (port 5432) from all worker nodes
․ installation and operation by bellesgm without root privilege
․ database software: PostgreSQL
○ WN & CE
– kernel parameters
․ kernel.msgmnb = 65536
․ kernel.msgmni = 128
․ kernel.msgmax = 32768
– Walltime limit
․ 336 hours if possible

(2) NFS Server

□ The accounts on the WNs must have write permission on the shared area.

> yum -y install nfs-utils

> cat /etc/exports
#/opt/xcat (ro,no_root_squash,sync)
# WNs & CE of the cluster for Belle Experiment
/opt/exp_soft/belle *.**.***(rw,sync)
/opt/exp_soft/belle *.**.***(rw,sync)
/opt/exp_soft/belle *.**.***(rw,sync)
...
/opt/exp_soft/belle *.***.***(rw,sync)
/opt/exp_soft/belle *.***.***(rw,sync)
# UI machine for installation test from local account
/opt/exp_soft/belle *.***.***.***(rw,sync)
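○ After editing /etc/exports, start the NFS services and re-export the shared area (a sketch for the SL4-era init scripts):

> service portmap start
> service nfs start
> exportfs -ra
> exportfs -v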

□ Create an administrator account (admin user).
○ This account must have the same uid and gid as the Belle sgm account.
○ Refer to groups.conf and users.conf on the CE and WNs.

> groupadd -g 99100 belle
> groupadd -g 99102 bellesgm
> useradd -u 999951 -g bellesgm -G belle -d /home/sgmbelle01 -m -k /etc/skel sgmbelle01
> id sgmbelle01
uid=999951(sgmbelle01) gid=99102(bellesgm) groups=99100(belle),99102(bellesgm)

□ Set the environment variable $VO_BELLE_SW_DIR.
○ If the NFS server itself has gLite installed, the value must match the one used on the WNs and CE.

> grep VO_BELLE_SW_DIR /opt/glite/etc/profile.d/grid-env.sh
gridenv_set "VO_BELLE_SW_DIR" "/opt/exp_soft/belle"

○ Create the directory for $VO_BELLE_SW_DIR and assign its ownership.

> mkdir /opt/exp_soft/belle
> chown -R sgmbelle01.bellesgm $VO_BELLE_SW_DIR

□ Open port 5432 on the PostgreSQL server.
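○ What opening port 5432 typically involves on the DB host (a sketch; the paths assume a stock PostgreSQL 8.x layout, and the WN subnet is a placeholder to be replaced with your own):

> # allow the worker nodes in pg_hba.conf (replace the subnet)
> echo "host all all 192.168.0.0/24 md5" >> /var/lib/pgsql/data/pg_hba.conf
> # listen on the external interfaces
> echo "listen_addresses = '*'" >> /var/lib/pgsql/data/postgresql.conf
> service postgresql restart
> # open the port in the host firewall
> iptables -I INPUT -p tcp --dport 5432 -j ACCEPT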

(3) WN and CE


□ Set the environment variable $VO_BELLE_SW_DIR.
○ The WNs and CE must use the same value.

> grep VO_BELLE_SW_DIR /opt/glite/etc/profile.d/grid-env.sh
gridenv_set "VO_BELLE_SW_DIR" "/opt/exp_soft/belle"

○ Create the directory for $VO_BELLE_SW_DIR and assign its ownership.

> mkdir /opt/exp_soft/belle
> chown -R sgmbelle01.bellesgm $VO_BELLE_SW_DIR

□ Mount the shared area from the NFS server.

> mkdir /opt/exp_soft/belle

> mount -t nfs :/opt/exp_soft/belle /opt/exp_soft/belle

or

> grep belle /etc/fstab
:/opt/exp_soft/belle /opt/exp_soft/belle nfs defaults 0 0
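○ A quick write test from a WN confirms both the mount and the group write permission (a sketch; use the sgm account name from your users.conf):

> su - sgmbelle -c "touch /opt/exp_soft/belle/.write_test && rm /opt/exp_soft/belle/.write_test && echo write OK"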

□ Set the kernel parameters as follows.

> sysctl -p | grep kernel.msg
kernel.msgmnb = 65536
kernel.msgmni = 128
kernel.msgmax = 32768
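○ The values can be made persistent in /etc/sysctl.conf and applied without a reboot (a sketch):

> cat >> /etc/sysctl.conf <<EOF
kernel.msgmnb = 65536
kernel.msgmni = 128
kernel.msgmax = 32768
EOF
> sysctl -p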


Figure 6. Grid sites supporting the Belle VO as of October 2009

Figure 7. Belle Grid utilization status 1


Appendix A. site-info.def for Belle VO

###########################################################################
# Copyright (c) Members of the EGEE Collaboration. 2004.
# See http://www.eu-egee.org/partners/ for details on the copyright
# holders.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###########################################################################
#
# NAME :        site-info.def
#
# DESCRIPTION : This is the main configuration file needed to execute the
#               yaim command. It contains a list of the variables needed to
#               configure a service.
#
# AUTHORS :     [email protected]
#
# NOTES :       - site-info.def currently contains the whole list of variables
#               needed to configure a site. However we have started to move
#               towards a new approach where node type specific variables will
#               be distributed by its corresponding module.
#               Although a unique site-info.def can still be used at configuration time.
#
#               - Service specific variables will be distributed under
#               /opt/glite/yaim/examples/siteinfo/services/glite_<node_type>
#               The definition of the variables can be done there or copy them in site-info.def.
#
#               - VO variables are currently distributed for a number of VOs with
#               real values that can be directly used by sys admins.
#               We have started to move towards a new approach where yaim will no longer distribute
#               these variables. Instead, VO values will be downloaded directly from the CIC
#               portal and will be integrated using the YAIM configurator.
#
#               - For more information on YAIM, please check:
#               https://twiki.cern.ch/twiki/bin/view/EGEE/YAIM
#
#               - For more details on the CIC portal, visit:
#               http://cic.in2p3.fr/
#               To know more about the YAIM configurator go to the VO management section.
#
# YAIM MODULE:  glite-yaim-core
#
###########################################################################

##########################
# YAIM related variables #
##########################

# This a variable to debug your configuration.
# If it is set, functions will print debugging information.
# Values: NONE, ABORT, ERROR, WARNING, INFO, DEBUG
YAIM_LOGGING_LEVEL=INFO

# Repository settings
# Be aware that the install option is only available for 3.0 services.
# You can ignore this variables if you are configuring a 3.1 service.
# LCG_REPOSITORY="'rpm http://glitesoft.cern.ch/EGEE/gLite/APT/R3.0/ rhel30 externals Release3.0 updates'"

LCG_REPOSITORY="'rpm http://linuxsoft.cern.ch/EGEE/gLite/R3.1/ rhel31 externals Release3.1 updates'"
CA_REPOSITORY="rpm http://linuxsoft.cern.ch/LCG-CAs/current production"
REPOSITORY_TYPE="yum"

###################################
# General configuration variables #
###################################

# MY_DOMAIN=$(hostname -d)
MY_DOMAIN=sdfarm.kr
INSTALL_ROOT=/opt

# These variables tell YAIM where to find additional configuration files.
WN_LIST=/opt/glite/yaim/etc/wn-list.conf
USERS_CONF=/opt/glite/yaim/etc/users.conf
GROUPS_CONF=/opt/glite/yaim/etc/groups.conf
FUNCTIONS_DIR=/opt/glite/yaim/functions

# Set this to "yes" if your site provides an X509 to KERBEROS Authentication Server
# Only for sites with Experiment Software Area under AFS
GSSKLOG=no
GSSKLOG_SERVER=my-gssklog.$MY_DOMAIN

OUTPUT_STORAGE=/tmp/jobOutput
JAVA_LOCATION="/usr/java/jdk1.5.0_14"

# Set this to '/dev/null' or some other dir if you want
# to turn off yaim installation of cron jobs
CRON_DIR=/etc/cron.d

# Set this to your preferred and firewall allowed port range
GLOBUS_TCP_PORT_RANGE="20000,25000"

# Choose a good password !
# And be sure that this file cannot be read by any grid job !
MYSQL_PASSWORD=*******

# Site-wide settings
[email protected]

SITE_CRON_EMAIL=$SITE_EMAIL   # not yet used, will appear in a later release
SITE_SUPPORT_EMAIL=$SITE_EMAIL
SITE_SECURITY_EMAIL=$SITE_EMAIL
SITE_DESC="NSDC, KISTI, South Korea"
SITE_OTHER_GRID="WLCG|EGEE"
SITE_NAME=KR-KISTI-GCRT-01
SITE_LOC="Daejeon, Korea"
SITE_LAT=36.366    # -90 to 90 degrees
SITE_LONG=127.366  # -180 to 180 degrees
SITE_WEB="unset"
# SITE_TIER="TIER 2"
# SITE_SUPPORT_SITE="unset"
# SITE_HTTP_PROXY="myproxy.my.domain"

# Set this if your WNs have a shared directory for temporary storage
CE_DATADIR=""

##############################
# CE configuration variables #
##############################

CE_HOST=ce-alice.sdfarm.kr

# Architecture and environment specific settings
CE_CPU_MODEL=Xeon
CE_CPU_VENDOR=intel
CE_CPU_SPEED=2000
CE_OS="ScientificCERNSLC"
CE_OS_RELEASE=4.6
CE_OS_VERSION="Beryllium"
# CE_OS_ARCH should be set to the result of `uname -m` run on a WN
CE_OS_ARCH=i686
CE_MINPHYSMEM=16384
CE_MINVIRTMEM=2048
CE_LOGCPU=112
CE_PHYSCPU=28
CE_SMPSIZE=8
CE_SI00=1075
CE_SF00=0

CE_OTHERDESCR="Cores=4"

CE_CAPABILITY="CPUScalingReferenceSI00=1075"

CE_OUTBOUNDIP=TRUE
CE_INBOUNDIP=FALSE
CE_RUNTIMEENV="
    LCG-2
    LCG-2_1_0
    LCG-2_1_1
    LCG-2_2_0
    LCG-2_3_0
    LCG-2_3_1
    LCG-2_4_0
    LCG-2_5_0
    LCG-2_6_0
    LCG-2_7_0
    GLITE-3_0_0
    GLITE-3_1_0
    R-GMA
"

##############################
# RB configuration variables #
##############################

RB_HOST=rb.$MY_DOMAIN

###############################
# WMS configuration variables #
###############################

WMS_HOST=wmslb.$MY_DOMAIN

###############################
# ADDED by KYUN               #
###############################

LB_HOST=$WMS_HOST

###################################
# myproxy configuration variables #
###################################

PX_HOST=px.$MY_DOMAIN
# PX_HOST=myproxy.cern.ch

# GRID_TRUSTED_BROKERS: DNs of services (RBs) allowed to renew/retrieve
# credentials from/at the myproxy server. Put single quotes around each trusted DN !!!

GRID_TRUSTED_BROKERS="
'/C=KR/O=KISTI/O=GRID/O=KISTI/CN=host/rb.sdfarm.kr'
'/C=KR/O=KISTI/O=GRID/O=KISTI/CN=host/wmslb.sdfarm.kr'
"

################################
# RGMA configuration variables #
################################

MON_HOST=rgma.$MY_DOMAIN
REG_HOST=lcgic01.gridpp.rl.ac.uk

###################################
# FTS configuration variables     #
###################################

# FTS_HOST=fts.$MY_DOMAIN
# FTS_HOST=fts.gridcenter.or.kr
# FTS_SERVER_URL="https://fts.${MY_DOMAIN}:8443/path/glite-data-transfer-fts"

###############################
# LFC configuration variables #
###############################

LFC_HOST=lfc.$MY_DOMAIN

LFC_DB_PASSWORD=*****

# Default value is to put the standard database on the LFC host
LFC_DB_HOST=$LFC_HOST
LFC_DB=cns_db

# If you use a DNS alias in front of your LFC, specify it here

LFC_HOST_ALIAS=""

# All catalogues are local unless you add a VO to
# LFC_CENTRAL, in which case that will be central
LFC_CENTRAL=""

# If you want to limit the VOs your LFC serves, add the locals here
LFC_LOCAL="alice dteam ops"

#########################################
# Torque server configuration variables #
#########################################

# Change this if your torque server is not on the CE
# This is ignored for other batch systems
BATCH_SERVER=$CE_HOST   # It obsoletes $TORQUE_SERVER
# TORQUE_SERVER=$CE_HOST

# Jobmanager specific settings
JOB_MANAGER=lcgpbs
CE_BATCH_SYS=torque
BATCH_BIN_DIR=/usr/bin
BATCH_VERSION=torque-2.1.9-4cri
BATCH_LOG_DIR=/var/spool/pbs/server_priv/accounting

#################################
# VOBOX configuration variables #
#################################

VOBOX_HOST=vobox-alice.$MY_DOMAIN
VOBOX_PORT=1975

################################
# APEL configuration variables #
################################

APEL_DB_PASSWORD=*****

##########################################
# Gridice server configuration variables #
##########################################

# GridIce server host name (usually run on the MON node).
GRIDICE_SERVER_HOST=$MON_HOST

####################################
# E2EMONIT configuration variables #
####################################

# This specifies the location to download the host specific configuration file
E2EMONIT_LOCATION=grid-deployment.web.cern.ch/grid-deployment/e2emonit/production

# Replace this with the siteid supplied by the person setting up the networking
# topology.
E2EMONIT_SITEID=my.siteid

######################################
# SE classic configuration variables #
######################################

# Classic SE
CLASSIC_HOST="se0.sdfarm.kr"
# CLASSIC_STORAGE_DIR="/storage/gridcenter.or.kr/grid"

##################################
# dcache configuration variables #
##################################

# dCache-specific settings
# ignore if you are not running d-cache

# Your dcache admin node
DCACHE_ADMIN=""

# Pools must include host:/absolutePath and may optionally include
# size host:size:/absolutePath if the size is not set the pool will
# fill the partition it is installed upon. size cannot be smaller
# than 4 (Gb) unless you are an expert.

DCACHE_POOLS="my-pool-node1:[size]:/pool-path1 my-pool-node2:/pool-path2"

# Optional

# For large sites the load on the admin-node is a limiting factor. Pnfs
# accounts for a lot of this load and so can be placed on a different
# node to balance the load better.

# Set DCACHE_DOOR_* to "off" if you dont want the door to start on any host
#
# DCACHE_DOOR_SRM="door_node1[:port]"
# DCACHE_DOOR_GSIFTP="door_node1[:port] door_node2[:port]"
# DCACHE_DOOR_GSIDCAP="door_node1[:port] door_node2[:port]"
# DCACHE_DOOR_DCAP="door_node1[:port] door_node2[:port]"
# DCACHE_DOOR_XROOTD="door_node1[:port] door_node2[:port]"
# DCACHE_DOOR_LDAP="admin_node"

# This option sets the pnfs server it defaults to the admin node if
# not stated.
#
# DCACHE_PNFS_SERVER="pnfs_node"
#
# Sets the portrange for dcache as a GSIFTP server in "passive" mode
#
# DCACHE_PORT_RANGE_PROTOCOLS_SERVER_GSIFTP=50000,52000
#
# Sets the portrange for dcache as a (GSI)DCAP and xrootd server in
# "passive" mode
#
# DCACHE_PORT_RANGE_PROTOCOLS_SERVER_MISC=60000,62000
#
# Sets the portrange for dcache as a GSIFTP client in "active" mode
#
# DCACHE_PORT_RANGE_PROTOCOLS_CLIENT_GSIFTP=33115,33215


# Only change if your site has an existing D-Cache installed
# to a different storage root.
# DCACHE_PNFS_VO_DIR="/pnfs/${MY_DOMAIN}/data"

# Set to "yes" only if YAIM shall reset the dCache configuration,
# or install DCache for the first time.
# i.e. if you want YAIM to configure dCache - WARNING:
# this may wipe out any dCache parameters previously configured!
# RESET_DCACHE_CONFIGURATION=no

# Set to "yes" only if YAIM shall reset the dCache nameserver,
# or install DCache for the first time.
# i.e. if you want YAIM to clear the content of dCache - WARNING:
# this may wipe out any dCache files previously stored!
# RESET_DCACHE_PNFS=no

# Set to "yes" only if YAIM shall reset the dCache Databases,
# or install DCache for the first time.
# i.e. if you want YAIM to clear the metadata of dCache - WARNING:
# this may wipe out any dCache file names previously stored!
# Leaving your system without any way to reestablish which files
# are stored.
# RESET_DCACHE_RDBMS=no

###############################
# DPM configuration variables #
###############################

# DPMDATA is now deprecated. Use an entry like $DPM_HOST:/filesystem in
# the DPM_FILESYSTEMS variable.
# From now on we use DPM_DB_USER and DPM_DB_PASSWORD to make clear
# its different role from that of the dpmmgr user who owns the
# directories and runs the daemons.

# The name of the DPM head node
# DPM_HOST=""   # my-dpm.$MY_DOMAIN
DPM_HOST="se0.sdfarm.kr"   # my-dpm.$MY_DOMAIN

# The DPM pool name (max 15 character long name)
# DPMPOOL=the_dpm_pool_name
DPMPOOL=NSDC-pool-0

# The filesystems/partitions parts of the pool
# DPM_FILESYSTEMS="$DPM_HOST:/path1 my-dpm-poolnode.$MY_DOMAIN:/path2"
DPM_FILESYSTEMS="se0.sdfarm.kr:/data"

# The database user
DPM_DB_USER=dpmmgr

# The database user password
DPM_DB_PASSWORD=*****

# The DPM database host
DPM_DB_HOST=$DPM_HOST

# The DPM db name. Default is dpm_db
# DPM_DB=dpm_db

# The DPNS db name. Default is cns_db
# DPNS_DB=cns_db

# The DPM infosystem user name
# DPM_INFO_USER=dpminfo

# The DPM infosystem user password
# DPM_INFO_PASS=the-dpminfo-db-user-pwd

# Specifies the default amount of space reserved for a file
DPMFSIZE=1G

# Variable for the port range - Optional, default value is shown
# RFIO_PORT_RANGE="20000 25000"

###########
# SE_LIST #
###########

# SE_LIST="$CLASSIC_HOST $DPM_HOST $DCACHE_ADMIN"
SE_LIST="$DPM_HOST"
SE_MOUNT_INFO_LIST="none"
# SE_ARCH="multidisk"   # "disk, tape, multidisk, other"
SE_ARCH="disk"          # "disk, tape, multidisk, other"

################################
# BDII configuration variables #
################################

BDII_HOST=bdii.$MY_DOMAIN
SITE_BDII_HOST=ce-alice.$MY_DOMAIN

BDII_SITE_TIMEOUT=120
BDII_RESOURCE_TIMEOUT=`expr "$BDII_SITE_TIMEOUT" - 5`
GIP_RESPONSE=`expr "$BDII_RESOURCE_TIMEOUT" - 5`
GIP_FRESHNESS=60
GIP_CACHE_TTL=300
GIP_TIMEOUT=150

# Check the validity of this URL in the documentation
# BDII_HTTP_URL="http://lcg-bdii-conf.cern.ch/bdii-conf/bdii.conf"
BDII_HTTP_URL="http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf"

# The Freedom of Choice of Resources service allows a top-level BDII
# to be instructed to remove VO-specific access control lines for
# resources that do not meet the VO requirements
# BDII_FCR=http://lcg-fcr.cern.ch:8083/fcr-data/exclude.ldif
BDII_FCR="http://goc.grid-support.ac.uk/gridsite/bdii/BDII/www/bdii-update.ldif"

# Ex.: BDII_REGIONS="CE SE RB PX VOBOX"
BDII_REGIONS="CE SE RB PX LFC VOBOX FTS"   # list of the services provided by the site

# The following examples are valid for gLite 3.0
# If you are configuring a 3.1 node change the port to 2170 and mds-vo-name=resource
BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid"
BDII_CE01_URL="ldap://ce01.sdfarm.kr:2170/mds-vo-name=resource,o=grid"
BDII_SE_URL="ldap://$DPM_HOST:2170/mds-vo-name=resource,o=grid"
BDII_RB_URL="ldap://$RB_HOST:2135/mds-vo-name=local,o=grid"
BDII_PX_URL="ldap://$PX_HOST:2170/mds-vo-name=resource,o=grid"
BDII_WMS_URL="ldap://$WMS_HOST:2170/mds-vo-name=resource,o=grid"
BDII_MON_URL="ldap://$MON_HOST:2170/mds-vo-name=resource,o=grid"
BDII_LFC_URL="ldap://$LFC_HOST:2170/mds-vo-name=resource,o=grid"
BDII_VOBOX_URL="ldap://$VOBOX_HOST:2170/mds-vo-name=resource,o=grid"
BDII_FTS_URL="unset"

##############################
# VO configuration variables #
##############################
#
# This file contains variables defined for the following VOs
# atlas
# alice
# lhcb
# cms
# dteam
# biomed
# ops
#
# Edit the following set of variables if you want to configure a different VO:
# VO_<vo_name>_SW_DIR
# VO_<vo_name>_DEFAULT_SE
# VO_<vo_name>_STORAGE_DIR
# VO_<vo_name>_POOL_PATH (optional)
# VO_<vo_name>_VOMS_SERVERS
# VO_<vo_name>_VOMS_EXTRA_MAPS (optional)
# VO_<vo_name>_VOMSES
# VO_<vo_name>_VOMS_CA_DN

- 35 - gLite 3.1기반 Belle Grid 구축 보고서

#
# If you are configuring a DNS-like VO, please check
# the following URL: https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400#vo_d_directory
#
# IMPORTANT! Please, take into account that in the future YAIM will no longer provide VO
# related variables for these VOs. This information should be obtained out of the CIC portal:
# http://cic.in2p3.fr/
#
# The VO variables will be automatically generated by the YAIM configurator and integrated in YAIM.

# Space separated list of supported VOs by your site
# VOS="atlas alice lhcb cms dteam biomed ops"
VOS="alice dteam ops belle"
QUEUES=${VOS}

# For each queue define a <queue_name>_GROUP_ENABLE variable which is a list
# of VO names and VOMS FQANs
# Ex.: MYQUEUE_GROUP_ENABLE="ops atlas cms /VO=cms/GROUP=/cms/Susy"
# In DNS like VO names dots and dashes should be replaced with underscore:
# Ex.: MYQUEUE_GROUP_ENABLE="my.test-queue"
# MY_TEST_QUEUE_GROUP_ENABLE="ops atlas"

# ATLAS_GROUP_ENABLE="atlas"
ALICE_GROUP_ENABLE="alice"
# LHCB_GROUP_ENABLE="lhcb"
# CMS_GROUP_ENABLE="cms"
DTEAM_GROUP_ENABLE="dteam"
# BIOMED_GROUP_ENABLE="biomed"
OPS_GROUP_ENABLE="ops"
BELLE_GROUP_ENABLE="belle"

VO_SW_DIR=/opt/exp_soft

# Set this if you want a scratch directory for jobs
EDG_WL_SCRATCH=""

#########
# atlas #
#########
# VO_ATLAS_SW_DIR=$VO_SW_DIR/atlas
# VO_ATLAS_DEFAULT_SE=$DPM_HOST
# VO_ATLAS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/atlas
# VO_ATLAS_VOMS_POOL_PATH="/lcg1"
# VO_ATLAS_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/atlas?/atlas/'
# VO_ATLAS_VOMS_EXTRA_MAPS="'Role=production production' 'usatlas .usatlas'"
# VO_ATLAS_VOMSES="'atlas lcg-voms.cern.ch 15001 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch atlas' 'atlas voms.cern.ch 15001 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch atlas'"
# VO_ATLAS_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"
# VO_ATLAS_RBS="atlasrb1.cern.ch atlasrb2.cern.ch"

#########
# alice #
#########
VO_ALICE_SW_DIR=$VO_SW_DIR/alice
VO_ALICE_DEFAULT_SE=$DPM_HOST
VO_ALICE_STORAGE_DIR=$DPM_HOST/alice
VO_ALICE_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/alice?/alice/' 'vomss://voms.cern.ch:8443/voms/alice?/alice/'"
VO_ALICE_VOMSES="'alice lcg-voms.cern.ch 15000 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch alice' 'alice voms.cern.ch 15000 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch alice'"
VO_ALICE_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"

#######
# cms #
#######
# VO_CMS_SW_DIR=$VO_SW_DIR/cms
# VO_CMS_DEFAULT_SE=$DPM_HOST
# VO_CMS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/cms
# VO_CMS_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/cms?/cms/'
# VO_CMS_VOMSES="'cms lcg-voms.cern.ch 15002 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch cms' 'cms voms.cern.ch 15002 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch cms'"
# VO_CMS_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"

########
# lhcb #
########
# VO_LHCB_SW_DIR=$VO_SW_DIR/lhcb
# VO_LHCB_DEFAULT_SE=$DPM_HOST
# VO_LHCB_STORAGE_DIR=$CLASSIC_STORAGE_DIR/lhcb
# VO_LHCB_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/lhcb?/lhcb/'
# VO_LHCB_VOMS_EXTRA_MAPS="lcgprod lhcbprod"
# VO_LHCB_VOMSES="'lhcb lcg-voms.cern.ch 15003 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch lhcb' 'lhcb voms.cern.ch 15003 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch lhcb'"
# VO_LHCB_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"

#########
# dteam #
#########
VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
VO_DTEAM_DEFAULT_SE=$DPM_HOST
VO_DTEAM_STORAGE_DIR=$DPM_HOST/dteam
VO_DTEAM_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/dteam?/dteam/' 'vomss://voms.cern.ch:8443/voms/dteam?/dteam/'"
VO_DTEAM_VOMSES="'dteam lcg-voms.cern.ch 15004 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch dteam' 'dteam voms.cern.ch 15004 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch dteam'"
VO_DTEAM_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"

#######
# ops #
#######
VO_OPS_SW_DIR=$VO_SW_DIR/ops
VO_OPS_DEFAULT_SE=$DPM_HOST
VO_OPS_STORAGE_DIR=$DPM_HOST/ops
VO_OPS_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/ops?/ops/' 'vomss://voms.cern.ch:8443/voms/ops?/ops/'"
VO_OPS_VOMSES="'ops lcg-voms.cern.ch 15009 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch ops' 'ops voms.cern.ch 15009 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch ops'"
VO_OPS_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"

#########
# Belle #
#########
VO_BELLE_SW_DIR=$VO_SW_DIR/belle
VO_BELLE_DEFAULT_SE=$DPM_HOST
VO_BELLE_STORAGE_DIR=$DPM_HOST/belle
VO_BELLE_VOMS_SERVERS="'vomss://voms.cc.kek.jp:8443/voms/belle/'"
VO_BELLE_VOMSES="'belle voms.cc.kek.jp 15020 /C=JP/O=KEK/OU=CRC/CN=host/voms.cc.kek.jp belle'"
VO_BELLE_VOMS_CA_DN="'/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority'"
