Scientific Visualization at Jülich Supercomputing Centre
Mitglied der Helmholtz-Gemeinschaft

Scientific Visualization at Jülich Supercomputing Centre
Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Cross-Sectional Team "Visualization"
[email protected], [email protected]
Page 1

Visualization at JSC
Cross-Sectional Team "Visualization": domain-specific user support and research at JSC
- Scientific Visualization: R&D and support for the visualization of scientific data
- Virtual Reality: VR systems for analysis and presentation
- Multimedia: multimedia productions for websites, presentations, or TV
- Deep Learning
http://www.fz-juelich.de/ias/jsc/EN/Expertise/Support/Visualization/_node.html
Page 2

Visualization at JSC
JURECA: General Hardware Setup
- 12x Visualization Nodes
  - 2 Nvidia Tesla K40 GPUs per node
  - 12 GB RAM on each GPU
- 6x Vis. Login Nodes
  - jurecavis.fz-juelich.de (jurecavis01 to jurecavis06 in round-robin fashion)
  - 512 GB RAM each
- 6x Vis. Batch Nodes
  - 4 nodes with 512 GB RAM, 2 nodes with 1024 GB RAM
  - special partition: vis
  - use the vis. batch nodes via job submission
(Cluster layout: 6x vis batch nodes, 6x vis login nodes, 12x login nodes, and 1872x compute nodes, connected via InfiniBand to the GPFS data storage.)
Keep in mind: visualization is NOT limited to the vis. nodes only; software rendering is possible on any node.
Page 3

Visualization at JSC
JUWELS: General Hardware Setup
- 4x Visualization Login Nodes
  - juwelsvis.fz-juelich.de (juwelsvis00 to juwelsvis03 in round-robin fashion)
  - 768 GB RAM each
  - 1 Nvidia Pascal P100 GPU per node
  - 12 GB RAM on the GPU
The vis nodes are not ready to use as of today, but will be very soon!
Page 4
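JURECA's vis. batch nodes described above are reached through the batch system ("special partition: vis"). As a hedged sketch only, a Slurm batch script requesting one vis node might look like the following; the account name and runtime are placeholders, and the rendering command (a ParaView `pvserver`) is an illustrative assumption, not taken from the slides:

```shell
#!/bin/bash
# Hypothetical Slurm job script for the JURECA "vis" partition (sketch only).
#SBATCH --partition=vis        # the special visualization partition named on the slide
#SBATCH --nodes=1              # vis batch nodes are allocated exclusively
#SBATCH --time=01:00:00       # placeholder runtime
#SBATCH --account=<PROJECT>    # placeholder: charged against the project's vis contingent

# Start the rendering server here; the concrete command depends on the
# installed software stack, e.g. a ParaView server:
srun pvserver --server-port=11111
```

Such a job is charged against the small visualization contingent (1000 core-h) that every project receives for the "vis" partition.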
Visualization at JSC
General Software Setup
Special software stack on the vis nodes:
- Base software: X server, X client (window manager), OpenGL (libGL.so, libGLU.so, libglx.so), Nvidia driver
- Middleware: Virtual Network Computing (VNC server, VNC client), VirtualGL
- Parallel and remote rendering apps, in-situ visualization: ParaView, VisIt
- Other visualization packages: VMD, PyMol, Blender, GPicView, GIMP
Page 5

Visualization at JSC
Usage Model for Vis Nodes
JURECA projects:
- visualization possible on the 6 vis login nodes
- every project gets a small contingent (1000 core-h) for the vis batch nodes in addition to the project contingent
- the contingent is valid for the partition "vis"
- if you need more compute time for visualization, send a request to [email protected]
JUWELS projects:
- visualization will be possible on the 4 vis login nodes
- no visualization batch nodes
- as of today, the visualization software stack is not production-ready, but close; available soon
Non-HPC-project users:
- apply for a test project
- get a small contingent (1000 core-h) for the vis nodes
Page 6

Remote 3D Visualization
Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Page 7

Remote 3D Visualization
General Setup (JURECA)
vis login node:
- direct user access
- no accounting
- shared with other users
- no parallel jobs (no srun)
vis batch node:
- access via the batch system, partition "vis"
- accounting
- exclusive usage
- parallel jobs possible
(Cluster layout: the user's workstation connects through the firewall to the 6x vis batch nodes, 6x vis login nodes, and 12x login nodes; InfiniBand connects these to the 1872x compute nodes and the GPFS data storage.)
Page 8

Remote 3D Visualization at Jülich Supercomputing Centre
- X forwarding + indirect rendering: slow, maybe incompatible → bad idea
- "remote-aware" visualization apps (ParaView, VisIt): application-dependent, error-prone setup
- VNC (Virtual Network Computing) + VirtualGL: our recommendation → good idea
- Xpra (stream application content with H.264) + VirtualGL: alternative recommendation → good idea
Page 9

Remote 3D Visualization with X Forwarding + Indirect Rendering
Traditional approach (X forwarding + indirect rendering):
    ssh -X <USERID>@<SERVER>
- uses the GLX extension to the X Window System
- the X display runs on the user's workstation
- OpenGL commands are encapsulated inside the X11 protocol stream
- OpenGL commands are executed on the user's workstation
Disadvantages:
- the user's workstation requires a running X server
- the user's workstation requires a graphics card capable of the required OpenGL
- the user's workstation determines the quality and speed of the visualization
- the user's workstation requires all data needed to visualize the 3D scene
- this approach is known to be error-prone (OpenGL version mismatch, ...)
Try to AVOID this approach for 3D visualization.
Page 10

Remote 3D Visualization with VNC (Virtual Network Computing) + VirtualGL
State-of-the-art approach (VNC with VirtualGL):
    vncserver, vncviewer
- platform-independent
- application-independent
- session sharing possible
Advantages:
- no X server is required on the user's workstation (the X display runs on the server, one per session)
- no OpenGL is required on the user's workstation (only images are sent)
- the quality of the visualization does not depend on the user's workstation
- the amount of data sent is independent of the size of the 3D scene
- disconnection and reconnection are possible
http://www.virtualgl.org
Try to USE this approach for 3D visualization.
Page 11

Remote 3D Visualization with VNC (Virtual Network Computing) + VirtualGL
VNC + VirtualGL:
    vglrun <application>
- OpenGL applications send both GLX and X11 commands to the same X display.
- Once VirtualGL is preloaded into an OpenGL application, it intercepts the GLX function calls from the application and rewrites them. The corresponding GLX commands are then sent to the X display of the 3D X server, which has a 3D hardware accelerator attached.
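Inside a VNC desktop session on a vis node, the interception described above is triggered simply by prefixing the application command with vglrun. A short hedged sketch; the test applications are illustrative examples, and their availability on the nodes is an assumption:

```shell
# Inside the VNC desktop on a vis node (sketch; app availability assumed):
vglrun glxgears                            # quick smoke test of GPU rendering
vglrun glxinfo | grep "OpenGL renderer"    # should report the Nvidia GPU,
                                           # not a software renderer
vglrun vmd                                 # any OpenGL app works the same way
```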
http://www.virtualgl.org
Page 12

Remote 3D Visualization with VNC (Virtual Network Computing) + VirtualGL
VNC + VirtualGL:
    vglrun <application>
- recommended solution for any OpenGL application, e.g. ParaView, VisIt, IDL, VMD, PyMol, ...
- allows fast and reliable server-side hardware rendering (GPU acceleration) with VirtualGL
- the user only installs a local VNC viewer
- desktop sharing possible
- should also be used for the frontend of "remote-aware" applications (e.g. ParaView, VisIt, ...)
http://www.virtualgl.org
Our recommendation: use VNC for remote rendering.
Page 13

Remote 3D Visualization
Predefined Desktop Profiles with VNC + VirtualGL:
    vncserver -profile vis
The profile provides a desktop with a nice blue JSC background, a clock counting up/down, an MOTD window, desktop symbols for vis apps (visualization applications, LLview, ...), and displays of CPU, memory, GPU, and VNC utilization.
Page 14

Remote 3D Visualization with VNC + VirtualGL (JURECA)
How to start a VNC session:
1. SSH to the HPC system, authenticating via SSH key pair.
2. Start the VNC server, e.g. with "vncserver -geometry 1920x1080 -profile vis -fg".
3. Look at the output and note the display number, e.g. "... desktop is jrc1386:3" (display 3 corresponds to TCP port 5900+3=5903).
4. Establish an SSH tunnel to exactly this node with the correct source and destination port (5903 in the example above).
5.
Start the local VNC client and connect to the remote display.
https://trac.version.fz-juelich.de/vis/wiki/vnc3d/manual
Page 15

Remote 3D Visualization (possible scenarios)
Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Page 16

Visualization Scenario 1: Vis Login Node with VNC (JURECA)
Connection: ssh + VNC tunnel (port 590<d>) through the firewall to a vis login node.
vis login node:
+ no batch job needed, no accounting
- resources shared between users
https://trac.version.fz-juelich.de/vis
Page 17

Visualization Scenario 2: Vis Batch Node with VNC (JURECA)
Connection: ssh + VNC tunnel (port 590<d>) through the firewall to a vis batch node.
vis batch node:
- batch job needed, accounting, exclusive usage
+ server can run in parallel (but the number of vis nodes is limited to 4)
https://trac.version.fz-juelich.de/vis
Page 18

Visualization Scenario 3: Vis. Login for GUI, Comp.
Nodes for Server (JURECA)
Connection: ssh + VNC tunnel (port 590<d>).
vis login node:
+ no batch job needed, no accounting
- resources shared between users
vis batch node:
- batch job needed, accounting, exclusive usage
+ server can run in parallel (but the number of vis nodes is limited to 4)
https://trac.version.fz-juelich.de/vis
Page 19

Visualization Scenario 4: Vis Login for GUI, Compute for Server (JURECA)
Connection: ssh + VNC tunnel (port 590<d>).
vis login node:
+ no batch job needed, no accounting
- resources shared between users
compute nodes:
+ the vis app server can run in parallel on a really huge number of nodes
- only software rendering
https://trac.version.fz-juelich.de/vis
Page 20

Visualization Scenario 5: ParaView (or VisIt) without VNC (JURECA)
Connection: ssh + tunnel (ParaView: port 11111).
user workstation:
- the user has to install ParaView or VisIt on his/her workstation
vis batch node:
- batch job needed, accounting, exclusive usage
+ server can run in parallel (but the number of vis nodes is limited to 4)
https://trac.version.fz-juelich.de/vis
Page 21

Remote 3D Visualization (alternatives)
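As a compact summary of the VNC HowTo on Page 15: the VNC TCP port is always 5900 plus the display number that vncserver reports, and the SSH tunnel must target exactly the node that printed that display. A hedged sketch with the slide's example values (display 3 on node jrc1386; the domain suffix on the hostname is an assumption):

```shell
# Display number from the vncserver output "... desktop is jrc1386:3"
display=3
port=$((5900 + display))   # VNC listens on TCP port 5900 + display number
echo "$port"               # -> 5903

# Tunnel to exactly that node (jrc1386 in the example; domain suffix assumed),
# then point the local VNC client at localhost:<port>:
echo "ssh -L ${port}:localhost:${port} <USERID>@jrc1386.fz-juelich.de"
```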