Computing problems
Sandeep Bhowmik, Researcher at NICPB, Tallinn, Estonia
February 22, 2019

Hadoop, HDFS and FUSE
• Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
• The core of Apache Hadoop consists of a storage part, known as the Hadoop Distributed File System (HDFS), and a processing part, the MapReduce programming model.
• HDFS is a distributed, scalable, and portable file system written in Java for the Hadoop framework.
• HDFS can be mounted directly with a Filesystem in Userspace (FUSE) virtual file system on Linux and some other Unix systems.
• The hadoop-hdfs-fuse package makes it possible to use an HDFS cluster as if it were a traditional Linux filesystem.

Overview
The File System (FS) shell includes various shell-like commands that interact directly with the Hadoop Distributed File System (HDFS) as well as the other file systems that Hadoop supports, such as Local FS, HFTP FS, S3 FS, and others.
• hadoop fs -ls: can point to any file system
• hadoop dfs -ls: deprecated
• hdfs dfs -ls: used to execute operations on HDFS

Hadoop HDFS Commands
• ls (hdfs dfs -ls /): lists all files/directories under the given HDFS destination path.
• cat (hdfs dfs -cat /hadoop/test): displays the content of the HDFS file test on stdout.
• appendToFile (hdfs dfs -appendToFile /home/test1 /hadoop/test2): appends the content of the local file test1 to the HDFS file test2.
• put (hdfs dfs -put /home/sample /hadoop): copies a file from the local file system to HDFS.
• copyFromLocal (hdfs dfs -copyFromLocal /home/sample /hadoop): works like put, except that the source is restricted to a local file reference.
• copyToLocal (hdfs dfs -copyToLocal /newfile /home/): works like get, except that the destination is restricted to a local file reference.
• get (hdfs dfs -get /newfile /home/): copies a file from HDFS to the local file system.
• cp (hdfs dfs -cp /hadoop/file1 /hadoop1): copies a file from source to destination on HDFS; here file1 is copied from the hadoop directory to the hadoop1 directory.
• rm (hdfs dfs -rm /hadoop/file1): deletes file1 from the hadoop directory (sends it to the trash).
• mkdir (hdfs dfs -mkdir /hadoop): creates a directory at the specified HDFS location.
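The same shell commands can also be driven from a script instead of listing the FUSE mount. Below is a minimal sketch (not from the slides) that wraps hdfs dfs -ls in Python via subprocess; the helper name hdfs_ls and the /hadoop path are illustrative assumptions, and the hdfs CLI is assumed to be on PATH.

import subprocess

def hdfs_ls(path):
    # Run "hdfs dfs -ls" and return the listed paths.
    output = subprocess.check_output(["hdfs", "dfs", "-ls", path]).decode()
    entries = []
    for line in output.splitlines():
        # Skip the "Found N items" header line; the last whitespace-separated
        # token of each remaining line is the full path of one entry.
        if line and not line.startswith("Found"):
            entries.append(line.split()[-1])
    return entries

print(hdfs_ls("/hadoop"))  # hypothetical path, taken from the command table above

Taking the last whitespace-separated token works for paths like the ones above, but would break on file names that contain spaces.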
Problem
According to Lauri, the mount crashes because the machine is an old CentOS 6 node with old FUSE/kernel/Hadoop libraries, and/or because all of you are trying to access it and read/list thousands of files/directories, even with ls --color mode on:

root 6790 0.0 0.0 116220 2080 ? D Jan28 0:00 ls -la /hdfs/
ssawant 9030 0.0 0.0 115764 764 ? D Jan28 0:00 ls --color=auto /hdfs/local/ssawant/hhAnalysis/2017/
ram 12332 0.0 0.0 115796 2124 ? D 19:16 0:00 ls --color=auto -lt /hdfs/local/ram/
karl 12561 0.0 0.0 115764 768 ? D Jan27 0:00 ls --color=auto /hdfs
acaan 14311 0.0 0.0 115772 844 ? D Jan26 0:00 ls --color=auto /hdfs/local/acaan/ttHAnalysis/2017/inclusive_toNN_2017Dec18/histograms/inclusive_toNN/forBDTtraining
acaan 18089 0.0 0.0 115772 772 ? D Jan26 0:00 ls --color=auto /hdfs/local/acaan/ttHAnalysis/2017/inclusive_toNN_2017Dec18/histograms/inclusive_toNN/forBDTtraining
acaan 19794 0.0 0.0 115772 816 ? D 19:41 0:00 ls --color=auto /hdfs
root 22287 0.0 0.0 116220 2128 pts/54 D+ 19:50 0:00 ls -la /hdfs/
acaan 23133 0.0 0.0 115772 848 ? D Jan26 0:00 ls --color=auto /hdfs/
karl 42188 0.0 0.0 18952 800 ? D Jan26 0:00 ls --color=auto /hdfs
acaan 51873 0.0 0.0 115772 816 ? D Jan27 0:00 ls --color=auto /hdfs/
ram 53976 0.0 0.0 115796 2096 ? D Jan28 0:00 ls --color=auto -lt /hdfs/local/ram/

Use of ls -l
The slide shows excerpts from the sbatch job templates where ls -l is used:
• python/templates/sbatch-node.produce.sh.template: listing of current directory
• python/templates/sbatch-node.hadd.sh.template: contents of temporary job dir; see output file in output directory (here the Hadoop command is already used for the /hdfs area)
• python/templates/sbatch-node.sh.template: listing of current directory

Use of os.path.exists
• python/jobTools.py: in the function that generates the input file list for each job
• python/sbatchManagerTools.py: def is_file_ok(output_file_name, validate_outputs = True, min_file_size = 20000) checks whether the output file exists and whether its size exceeds 20000 bytes

Use of os.path.isfile
It is used in many places.

Suggestions by Lauri
• This automatic script, or some other script, is misbehaving and crashing the HDFS mount.
• hdfs dfs -ls is many times faster if a directory contains a lot of files/directories.
• hdfs dfs accesses HDFS directly; a local ls accesses it over FUSE.
• The HDFS FUSE implementation cannot handle that much load.
• File System Shell reference: https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/FileSystemShell.html
• It seems there is also an HDFS library for Python: https://hdfscli.readthedocs.io/en/latest/

Suggestions by Karl
It's not the only place where we use FUSE, though, e.g. when we check whether a target file exists: https://github.com/HEP-KBFI/tth-htt/blob/master/python/jobTools.py#L104
In principle we can, but we cannot completely switch off FUSE because:
1) our Makefile approach relies on stat'ing paths starting with /hdfs, and I don't think it is possible to provide a user-defined stat command for the make program;
2) last time I tested, I found that opening ROOT files via the THDFS plugin that I wrote is prohibitive (I think it had something to do with the limit on the number of threads in the JRE or something).
What we could do is:
1) replace all os.path.* commands with wrappers that check whether a path starts with /hdfs; if it does, invoke hdfs commands, otherwise use the standard commands (sketched below);
2) similarly in C++, replace all boost::filesystem:: functions with wrappers;
3) we also need a wrapper for writing run:lumi:event number files to /hdfs.
It's a compromise. About 3: maybe it would be easier to produce the run:lumi:event file "locally" and copy it to /hdfs with the hdfs command (as we already do with the ROOT files that the analysis jobs output). This way we wouldn't need to write any special C++ code for writing a text file onto /hdfs.
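A minimal sketch of Karl's first suggestion, assuming the hdfs CLI is on PATH and that stripping the /hdfs prefix yields the corresponding HDFS path; the wrapper names path_exists and is_file are hypothetical and not the actual tth-htt code.

import os
import subprocess

HDFS_PREFIX = '/hdfs'

def _hdfs_test(flag, path):
    # "hdfs dfs -test -e|-d <path>" returns exit code 0 if the test succeeds.
    return subprocess.call(['hdfs', 'dfs', '-test', flag, path]) == 0

def path_exists(path):
    # Replacement for os.path.exists(): talk to HDFS directly for /hdfs paths.
    if path.startswith(HDFS_PREFIX + '/'):
        return _hdfs_test('-e', path[len(HDFS_PREFIX):])
    return os.path.exists(path)

def is_file(path):
    # Replacement for os.path.isfile(): exists on HDFS and is not a directory.
    if path.startswith(HDFS_PREFIX + '/'):
        hdfs_path = path[len(HDFS_PREFIX):]
        return _hdfs_test('-e', hdfs_path) and not _hdfs_test('-d', hdfs_path)
    return os.path.isfile(path)

Each call still spawns a JVM for the hdfs client, so for checks inside tight loops one of the Python HDFS libraries shown on the next slides may be a better fit.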
HdfsCLI
https://hdfscli.readthedocs.io/en/latest/index.html

from hdfs import InsecureClient

client = InsecureClient('http://host:port', user='ann')

# Loading a file in memory.
with client.read('features') as reader:
    features = reader.read()

Native Hadoop file system (HDFS) connectivity in Python
http://wesmckinney.com/blog/python-hdfs-interfaces/
"There have been many Python libraries developed for interacting with the Hadoop File System, HDFS, via its WebHDFS gateway as well as its native Protocol Buffers-based RPC interface. I'll give you an overview of what's out there and show some engineering I've been doing to offer a high-performance HDFS interface within the developing Arrow ecosystem."

from pyarrow import HdfsClient

# Using libhdfs
hdfs = HdfsClient(host, port, username, driver='libhdfs')

# Using libhdfs3
hdfs_alt = HdfsClient(host, port, username, driver='libhdfs3')

with hdfs.open('/path/to/file') as f:
    ...

Saving to HDFS with the native HDFS client
http://phillywiki.azurewebsites.net/articles/Writing_module_file_with_native_hdfs_client.html
hdfs-mount is easy to use. However, when saving models or checkpoints to HDFS, hdfs-mount has a quality gap compared to the native HDFS client, which follows the HDFS protocol more reliably when writing data. For example, whenever the Philly HDFS storage is near its capacity, many user jobs report Input/Output errors when saving models or checkpoints to HDFS. Therefore the native HDFS client is recommended for saving data to HDFS in the PhillyOnPrem and PhillyOnAP clusters.

import os
import subprocess
import tempfile

def convert_to_tmpPath(filepath):
    tmpfolder = tempfile.gettempdir()
    if filepath.startswith('/hdfs/'):
        filepath = filepath.replace('/hdfs/', '', 1)
    tmppath = os.path.join(tmpfolder, filepath)
    dir = os.path.dirname(tmppath)
    if not os.path.exists(dir):
        os.makedirs(dir)
    return tmppath

# write to local disk first
filepath = '/hdfs/pnrsy/test/hello.txt'
tmpfilepath = convert_to_tmpPath(filepath)
with open(tmpfilepath, 'w') as f:
    f.write('hello')

Optimizing Mountable HDFS
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cdh_ig_hdfs_mountable.html
By default, the CDH 5 package installation creates the /etc/default/hadoop-fuse file with a maximum heap size of 128 MB. You might need to change the JVM minimum and maximum heap size for better performance; the maximum heap limit is about 2 GB (2048 MB).
At the moment we have: export LIBHDFS_OPTS="-Xms128m -Xmx512m"

Conclusions
• The Hadoop command is already used instead of a plain ls in some places.
• Do we need to replace hadoop fs with hdfs dfs?
• Do we need to replace all Python commands with hdfs as well?
• Will increasing the maximum heap size help?
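As a follow-up to the conclusions, here is a minimal sketch of the "write locally, then copy to /hdfs" step that the saving example above leaves out and that Karl suggests for the run:lumi:event files. It assumes the hdfs CLI is on PATH and that stripping the /hdfs prefix yields the HDFS path; the helper name copy_to_hdfs is hypothetical.

import subprocess

def copy_to_hdfs(local_path, target_path):
    # Strip the FUSE mount prefix so the shell command addresses HDFS directly.
    if target_path.startswith('/hdfs/'):
        target_path = target_path[len('/hdfs'):]
    # copyFromLocal -f overwrites the destination if it already exists.
    subprocess.check_call(['hdfs', 'dfs', '-copyFromLocal', '-f',
                           local_path, target_path])

# e.g. after the local write shown on the "Saving to HDFS" slide:
# copy_to_hdfs(tmpfilepath, filepath)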