
Overview

Modules

A large number of popular software packages are installed on Palmetto and can be used without any setup or configuration. These include:


  • Compilers (such as gcc, Intel, and PGI)
  • Libraries (such as OpenMPI, HDF5, Boost)
  • Programming languages (such as Python, MATLAB, R)
  • Scientific applications (such as LAMMPS, Paraview, ANSYS)
  • Others (e.g., Git, PostgreSQL, Singularity)

These packages are available as modules on Palmetto. The following commands can be used to inspect, activate and deactivate modules:

Command                       Purpose
module avail                  List all packages available (on current system)
module add package/version    Add a package to your current shell environment
module list                   List packages you have loaded
module rm package/version     Remove a currently loaded package
module purge                  Remove all currently loaded packages
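
As a quick illustration, a typical module workflow might look like the following (the package name and version are purely illustrative):

~~~
module avail                   # see what is available
module add anaconda3/2019.10   # load a package (name/version illustrative)
module list                    # confirm it is loaded
module rm anaconda3/2019.10    # unload it again
~~~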

See the Quick Start Guide for more details about modules.

Licensed software

Many site-licensed software packages are available on Palmetto cluster (e.g., MATLAB, ANSYS, COMSOL). There are limitations on the number of jobs that can run using these packages. See this section of the User's Guide on how to check license usage.

Individual-owned or group-owned licensed software can also be run on Palmetto.

Software with graphical applications

See this section of the User's Guide on how to use software with a graphical user interface (GUI).

Installing your own software

See this section of the User's Guide on how to install your own software.

ABAQUS

ABAQUS is a Finite Element Analysis software used for engineering simulations. Currently, ABAQUS versions 6.10, 6.13, 6.14 are available on Palmetto cluster as modules.

To see license usage of ABAQUS-related packages, you can use the lmstat command:

Running ABAQUS interactive viewer

To run the interactive viewer, you must log-in with tunneling enabled, and then ask for an interactive session:

Once logged-in to an interactive compute node, to launch the interactive viewer, load the abaqus module, and run the abaqus executable with the viewer and -mesa options:
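
A minimal sketch of these two commands, assuming one of the versions listed above:

~~~
module add abaqus/6.14   # or 6.10 / 6.13
abaqus viewer -mesa
~~~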

Similarly, to launch the ABAQUS CAE graphical interface:
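
Assuming the same module is loaded, the corresponding command would be:

~~~
abaqus cae -mesa
~~~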

Running ABAQUS in batch mode

To run ABAQUS in batch mode on Palmetto cluster, you can use the job script in the following example as a template. This example shows how to run ABAQUS in parallel using MPI. This demonstration runs the 'Axisymmetric analysis of bolted pipe flange connections' example provided in the ABAQUS documentation here. Please see the documentation for the physics and simulation details. You can obtain the files required to run this example using the following commands:

The .inp files describe the model and simulation to be performed - see the documentation for details. The batch script job.sh submits the job to the cluster. The .env file is a configuration file that must be included in all ABAQUS job submission folders on Palmetto.

In the batch script job.sh:

  1. The following line extracts the total number of CPU cores available across all the nodes requested by the job:

~~~
NCORES=`wc -l $PBS_NODEFILE | gawk '{print $1}'`
~~~

  2. The following line runs the ABAQUS program, specifying various options such as the path to the .inp file, the scratch directory to use, etc.

~~~
abaqus job=abdemo double input=/scratch2/$USER/ABAQUS/boltpipeflange_axi_solidgask.inp scratch=$SCRATCH cpus=$NCORES mp_mode=mpi interactive
~~~

To submit the job:
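
Assuming the batch script is named job.sh as above, the submission is typically:

~~~
qsub job.sh
~~~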

After job completion, you will see the job submission directory (/scratch2/username/ABAQUS) populated with various files:

If everything went well, the job output file (AbaqusDemo.o9668628) should look like this:

The output database (.odb) file contains the results of the simulation, which can be viewed using the ABAQUS viewer:

ANSYS

Graphical Interfaces

To run the various ANSYS graphical programs, you must log-in with tunneling enabled and then ask for an interactive session:

Once logged-in to an interactive compute node, you must first load the ANSYS module along with the Intel module:

And then launch the required program:

For ANSYS APDL

If you are using e.g., ANSYS 20.2 instead, then the executable is called ansys202.

For CFX

For ANSYS Workbench

For Fluent

For ANSYS Electromagnetics (only available for ansys/20.2)
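
The launcher commands below are a hedged sketch for the programs listed above; the executable names follow the usual ANSYS conventions (the version suffix 195 corresponds to ANSYS 19.5) and may differ for the version you load:

~~~
ansys195 -g   # ANSYS APDL in graphical mode (ansys202 for ANSYS 20.2)
cfx5          # CFX launcher
runwb2        # ANSYS Workbench
fluent        # Fluent
ansysedt      # ANSYS Electromagnetics (ansys/20.2 only)
~~~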

Batch Mode

To run ANSYS in batch mode on Palmetto cluster, you can use the job script in the following example as a template. This example shows how to run ANSYS in parallel (using multiple cores/nodes). In this demonstration, we model the strain in a 2-D flat plate. You can obtain the files required to run this example using the following commands:

The input.txt batch file is generated for the model using the ANSYS APDL interface. The batch script job.sh submits the batch job to the cluster:

In the batch script job.sh:

  1. The following line extracts the nodes (machines) available for this job as well as the number of CPU cores allocated for each node:

~~~
machines=$(uniq -c $PBS_NODEFILE | awk '{print $2":"$1}' | tr '\n' ':')
~~~

  2. For ANSYS jobs, you should always use $TMPDIR (/local_scratch) as the working directory. The following lines ensure that $TMPDIR is created on each node:

~~~
# the loop header (e.g., a "for node in ..." over the job's nodes) precedes these lines in the full script
do
    ssh $node 'sleep 5'
    ssh $node 'cp input.txt $TMPDIR'
done
~~~

  3. The following line runs the ANSYS program, specifying various options such as the path to the input.txt file, the scratch directory to use, etc.

~~~
ansys195 -dir $TMPDIR -j EXAMPLE -s read -l en-us -b -i input.txt -o output.txt -dis -machines $machines -usessh
~~~

  4. Finally, the following lines copy all the data from $TMPDIR:

~~~
# again preceded by a loop over the job's nodes in the full script
do
    ssh $node 'cp -r $TMPDIR/* $PBS_O_WORKDIR'
done
~~~

To submit the job:
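
As with the ABAQUS example, assuming the batch script is named job.sh, the submission is typically:

~~~
qsub job.sh
~~~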

After job completion, you will see the job submission directory (/scratch2/username/ANSYS) populated with various files:

If everything went well, the job output file (ANSYSdis.o9752784) should look like this:

The results file (EXAMPLE.rst) contains the results of the simulation, which can be viewed using the ANSYS APDL graphical interface:

COMSOL


COMSOL is an application for solving Multiphysics problems. To see the available COMSOL modules on Palmetto:

To see license usage of COMSOL-related packages, you can use the lmstat command:

Graphical Interface

To run the COMSOL graphical interface, you must log-in with tunneling enabled, and then ask for an interactive session:

Once logged-in to an interactive compute node, to launch the interactive viewer, you can use the comsol command to run COMSOL:

The -np option can be used to specify the number of CPU cores to use. Remember to always use $TMPDIR as the working directory for COMSOL jobs.
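
A hedged sketch of such an interactive launch (the module version is illustrative):

~~~
module add comsol/5.4   # version illustrative
cd $TMPDIR              # always work in $TMPDIR for COMSOL jobs
comsol -np 8            # use 8 CPU cores
~~~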

Batch Mode

To run COMSOL in batch mode on Palmetto cluster, you can use the example batch scripts below as a template. The first example demonstrates running COMSOL using multiple cores on a single node, while the second demonstrates running COMSOL across multiple nodes using MPI. You can obtain the files required to run this example using the following commands:

Both of these examples run the 'Heat Transfer by Free Convection' application described here. In addition to the job.sh and job_mpi.sh scripts, to run the examples and reproduce the results, you will need to download the file free_convection.mph (choose the correct version) provided with the description (login required).

COMSOL batch job on a single node, using multiple cores:

COMSOL batch job across several nodes

GROMACS

  • Gromacs is architecture-specific software. It performs best when installed and configured for the specific hardware it runs on.
  • To simplify the process of setting up gromacs, we recommend that you set up your local spack using instructions from the following links.

  • Get a node (choose the node type you wish to run Gromacs on)

  • Identify architecture type:

Select the correct architecture based on the CPU model (one way to check the model is sketched after the list below):

  • E5-2665: sandybridge
  • E5-2680: sandybridge
  • E5-2670v2: ivybridge
  • E5-2680v3: haswell
  • E5-2680v4: broadwell
  • 6148G: skylake
  • 6252G: cascadelake
  • 6238R: cascadelake
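
As referenced above, a hedged way to check the CPU model on the node you were given (lscpu is generally available on the compute nodes):

~~~
lscpu | grep "Model name"   # match the reported model against the list above
~~~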

In this example, given the previous qsub command, we most likely will get a broadwell node:

Installing cuda

You should remember the hash value of the cuda installation for later use.

Installing fftw

Installing gromacs

Gromacs will now be available in your local module path

Running GROMACS interactively

As an example, we'll consider running the GROMACS ADH benchmark.

First, request an interactive job:

After the last command above completes, the .edr and .log files produced by GROMACS should be visible. Typically, the next step is to copy these results to the output directory:

Running GROMACS in batch mode

The PBS batch script for submitting the above is assumed to be inside /scratch1/$USER/gromacs_ADH_benchmark, and this directory already contains the input files:

HOOMD

Run HOOMD

HOOMD-blue v2.3.5 is now installed. Create a simple Python file “test_hoomd.py” to run HOOMD:

If you have logged out of the node, request an interactive session on a GPU node and add required modules:

Run the script interactively:
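
Assuming the file was named test_hoomd.py as above, running it interactively is simply:

~~~
python test_hoomd.py
~~~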

Alternatively, you can set up a PBS job script to run HOOMD in batch mode. A sample is below for Test_Hoomd.sh:

Submit the job:
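
Assuming the sample script is saved as Test_Hoomd.sh:

~~~
qsub Test_Hoomd.sh
~~~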

That's it.

Java

The Java Runtime Environment (JRE) version 1.6.0_11 is currently available cluster-wide on Palmetto. If a user needs a different version of Java, or if the Java Development Kit (JDK, which includes the JRE) is needed, that user is encouraged to download and install Java (JRE or JDK) for herself. Below is a brief overview of installing the JDK in a user's /home directory.

JRE vs. JDK

The JRE is basically the Java Virtual Machine (Java VM) that provides a platform for running your Java programs. The JDK is the fully featured Software Development Kit for Java, including the JRE, compilers, and tools like JavaDoc and Java Debugger used to create and compile programs.

Usually, when you only care about running Java programs, the JRE is all you'll need. If you are planning to do some Java programming, you will need the JDK.

Downloading the JDK

The JDK cannot be downloaded directly using the wget utility because a user must agree to Oracle's Java license agreement when downloading. So, download the JDK using a web browser and transfer the downloaded jdk-7uXX-linux-x64.tar.gz file to your /home directory on Palmetto using scp, sftp, or FileZilla:

Installing the JDK

The JDK is distributed in a Linux x86_64 compatible binary format, so once it has been unpacked, it is ready to use (no need to compile). However, you will need to set up your environment for using this new package by adding lines similar to the following at the end of your ~/.bashrc file:
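
A hedged sketch of such lines (the JDK directory name depends on the exact version you unpacked and is illustrative here):

~~~
# adjust the directory name to match the unpacked JDK version
export JAVA_HOME=$HOME/jdk1.7.0_XX
export PATH=$JAVA_HOME/bin:$PATH
~~~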

Once this is done, you can log-out and log-in again or simply source your ~/.bashrc file and then you'll be ready to begin using your new Java installation.

Julia

Julia is a high-level dynamic programming language that was originally designed to address the needs of high-performance numerical analysis and computational science.

Run Julia in Palmetto: Interactive

There are a few different versions of Julia available on the cluster.

Let's demonstrate how to use julia/1.1.1 on the Palmetto cluster together with the Gurobi Optimizer (a commercial optimization solver for linear programming, quadratic programming, etc.). Clemson University has different types of licenses for the Gurobi solver. In this example, we use Julia and the Gurobi solver to solve a linear programming problem on Palmetto.

Let's prepare a script to solve this problem, named jump_gurobi.jl. You can save this file to /scratch1/$username/Julia/.

Then type/copy the following code into the file jump_gurobi.jl:

Save the jump_gurobi.jl file; you are then ready to run Julia:
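
Assuming the julia module is loaded, running the script looks like:

~~~
julia jump_gurobi.jl
~~~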

Run Julia in Palmetto: Batch mode

  • Alternatively, you can set up a PBS job script to run Julia in batch mode. A sample is below for submit_julia.sh. You must install the JuMP and Gurobi packages first (one-time installation).

Submit the job:

$ qsub submit_julia.sh

The output file can be found in the same folder: output_JuMP.txt

Install your own Julia package using a conda environment and run it in JupyterHub

In addition to the traditional compilation of Julia, it is possible to install your own version of Julia and set up a kernel to work with JupyterHub.

Exit Julia and start JupyterHub on Palmetto. After spawning, click on New kernel in JupyterHub; you will see the Julia 1.1.1 kernel available for use.

Type in the following code to test:

LAMMPS

There are a few different versions of LAMMPS available on the cluster.


Note: the letters P, K, and V stand for the GPU architectures Pascal (P100), Kepler (K20, K40), and Volta (V100).

Installing custom LAMMPS on Palmetto

LAMMPS comes with a wide variety of supported packages catering to different simulation techniques. It is possible to build your own LAMMPS installation on Palmetto. We discuss two examples below, one using kokkos with GPU support and one using kokkos without GPU support.

Reserve a node, and pay attention to its GPU model.

Create a directory named software (if you don't already have it) in your home directory, and change to that directory.

  • Create a subdirectory called lammps inside software.
  • Download the latest version of lammps and untar.
  • In this example, we use 20200721, the latest dated version of lammps. You can set up spack (see User Software Installation), then run spack info lammps to see the latest recommended version of lammps.
  • You can change the name of the untarred directory to something easier to manage.

In recent versions, lammps uses cmake as its build system. As a result, we will be able to build multiple lammps executables within a single source download.

Lammps build with kokkos and gpu

  • Create a directory called build-kokkos-cuda
  • Change into this directory.

In building lammps, you will need to modify two cmake files, both inside the ./cmake/presets/ directory (this is a relative path assuming you are inside the previously created build-kokkos-cuda). A set of already prepared cmake templates is available inside ./cmake/presets, but you will have to modify them. It is recommended that you use ./cmake/presets/minimal.cmake and ./cmake/presets/kokkos-cuda.cmake as starting points.

For add-on simulation packages, make a copy of ./cmake/presets/minimal.cmake, and use ./cmake/presets/all_on.cmake as a reference point to see what is needed. Let's say we want user-meamc and user-fep in addition to what's in minimal.cmake for simulation techniques. We also need to include kokkos.

Use your favorite editor to add the necessary package names (in capitalized form) to my.cmake. Check the contents afterward.

  • Next, we need to modify ./cmake/presets/kokkos-cuda.cmake so that kokkos is built to the correct architectural specification. For Palmetto, the following table maps GPU models to Kokkos architecture names:

Palmetto GPU architectures    Architecture name for Kokkos
K20 and K40                   KEPLER35
P100                          PASCAL60
V100 and V100S                VOLTA70
  • Since we specified v100 in the initial qsub, ./cmake/presets/kokkos-cuda.cmake will need to be modified to use VOLTA70.
  • We will need to load three supporting modules from Palmetto. We will load modules that have been compiled for the specific architecture of v100 nodes.
  • Build and install
  • Test on LAMMPS's LJ data

Running LAMMPS - an example

Several existing examples are in the installed folder lammps-7Aug19/examples/. Detailed descriptions of all examples are here.

We run an example accelerated using different packages. Here is a sample batch script job.sh for this example:

Lammps build with kokkos without gpu

This is a bit similar to the build with kokkos and gpu. In a non-gpu build, kokkos will help manage the OpenMP threads, and the corresponding cmake file is ./cmake/presets/kokkos-openmp.cmake

Several ways to run LAMMPS

For more information on the comparison between various accelerator packages, please visit: https://lammps.sandia.gov/doc/Speed_compare.html

MATLAB

Checking license usage for MATLAB

You can check the availability of MATLAB licenses using the lmstat command:

Running the MATLAB graphical interface

To launch the MATLAB graphical interface on a PC/Windows machine, you should use MobaXterm, and then ask for an interactive session with the -X option:

Once logged-in, you must load one of the MATLAB modules:

And then launch the MATLAB program:

Mac users can run the MATLAB graphical interface through VNC.

Warning: DO NOT attempt to run MATLAB right after logging-in (i.e., on the login001 node). Always ask for an interactive job first. MATLAB sessions are automatically killed on the login node.

Running the MATLAB command line without graphics

To use the MATLAB command-line interface without graphics, you can request a compute node, load the Matlab module, and then use the -nodisplay and -nosplash options:
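
A hedged sketch of the launch (the module version is illustrative):

~~~
module add matlab/2019a   # version illustrative
matlab -nodisplay -nosplash
~~~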

This will work on both Windows and Mac platforms. To quit the MATLAB command-line interface, type:

MATLAB in batch jobs


To use MATLAB in your batch jobs, you can use the -r switch provided by MATLAB, which lets you run commands specified on the command-line. For example:
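
A hedged sketch of such a command:

~~~
# myscript.m is run by name (without the .m extension)
matlab -nodisplay -nosplash -r "myscript"
~~~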

will run the MATLAB script myscript.m. Or:
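
And, correspondingly, a sketch with the output redirected:

~~~
matlab -nodisplay -nosplash -r "myscript" > myscript_results.txt
~~~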

will run the MATLAB script myscript.m and write the output to the file myscript_results.txt. Thus, an example batch job using MATLAB could have a batch script as follows:

Compiling MATLAB code to create an executable

Often, you need to run a large number of MATLAB jobs concurrently (e.g., each job operating on different data). In such cases, you can avoid over-utilizing MATLAB licenses by compiling your MATLAB code into an executable. This can be done from within the MATLAB command-line as follows:
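
A hedged sketch of the compile command described here (typed at the MATLAB prompt; mycode.m is the M-file referred to below):

~~~
mcc -m -R -singleCompThread mycode.m
~~~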

Note: MATLAB will try to use all the available CPU cores on the system where it is running, and this presents a problem when you run your compiled executable on the cluster, where the available cores on a single node might be shared amongst multiple users. You can disable this 'feature' when you compile your code by adding the -R -singleCompThread option, as shown above.

The above command will produce the executable mycode, corresponding to the M-file mycode.m. If you have multiple M-files in your project and want to create a single executable, you can use a command like the following:

Once the executable is produced, you can run it like any other executable in your batch jobs. Of course, you'll also need the same MATLAB module loaded for your job's runtime environment.

Paraview

Using Paraview+GPUs to visualize very large datasets

Paraview can also use multiple GPUs on Palmetto cluster to visualize very large datasets. For this, Paraview must be run in client-server mode. The 'client' is your local machine on which Paraview must be installed, and the 'server' is the Palmetto cluster on which the computations/rendering is done.

  1. The version of Paraview on the client needs to match exactly the version of Paraview on the server. The client must be running Linux. You can obtain the source code used for installation of Paraview 5.0.1 on Palmetto from /software/paraview/ParaView-v5.0.1-source.tar.gz. Copy this file to the client, extract it and compile Paraview. Compilation instructions can be found in the Paraview documentation.

  2. You will need to run the Paraview server on Palmetto cluster. First, log-in with X11 tunneling enabled, and request an interactive session:

In the above example, we request 4 nodes with 2 GPUs each.

  3. Next, launch the Paraview server:

The server will be serving on a specific port number (like 11111) on this node. Note this number down.

  4. Next, you will need to set up 'port-forwarding' from the lead node (the node your interactive session is running on) to your local machine. This can be done by opening a terminal running on the local machine, and typing the following:
  5. Once port-forwarding is set up, you can launch Paraview on your local machine.

Pytorch

This page explains how to install the Pytorch package for use with GPUs on the cluster, and how to use it from Jupyter Notebook via JupyterHub.

GPU node

1) Request an interactive session on a GPU node. For example:

2) Load the Anaconda module:

3) Create a conda environment called pytorch_env (or any name you like):

4) Activate the conda environment:

5) Install Pytorch with GPU support from the pytorch channel:
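
A hedged sketch of that install (the pytorch channel may also pull in a matching CUDA toolkit package):

~~~
conda install -c pytorch pytorch
~~~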

This will automatically install some packages that are required for Pytorch, like MKL or NumPy. To see the list of installed packages, type

If you need additional packages (for example, Pandas), you can type

6) You can now run Python and test the install:

Each time you login, you will first need to load the required modules and also activate the pytorch_env conda environment before running Python:

Add Jupyter kernel:

If you would like to use Pytorch from Jupyter Notebook on Palmetto via JupyterHub, you need the following additional steps:

1) After you have installed Pytorch, install Jupyter in the same conda environment:

2) Now, set up a Notebook kernel called 'Pytorch'. For Pytorch with GPU support, do:
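
A hedged sketch of the kernel registration, assuming the ipykernel package is available in pytorch_env:

~~~
python -m ipykernel install --user --name pytorch_env --display-name Pytorch
~~~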

3) Create/edit the file .jhubrc in your home directory:

Add the following two lines to the .jhubrc file, then exit.

4) Log into JupyterHub. Make sure you have GPU in your selection if you want to use the GPU pytorch kernel.

5) Once your JupyterHub has started, you should see the Pytorch kernel in your list of kernels in the Launcher.

6) You are now able to launch a notebook using the pytorch with GPU kernel:

rclone

rclone is a command-line program that can be used to sync files and folders to and from cloud services such as Google Drive, Amazon S3, Dropbox, and many others.

In this example, we will show how to use rclone to sync files to a Google Drive account, but the official documentation has specific instructions for other services.

Setting up rclone for use with Google Drive on Palmetto

To use rclone with any of the above cloud storage services, you must perform a one-time configuration. You can configure rclone to work with as many services as you like.

For the one-time configuration, you will need to log-in with tunneling enabled. Once logged-in, ask for an interactive job:

Once the job starts, load the rclone module:

After rclone is loaded, you must set up a 'remote'. In this case, we will configure a remote for Google Drive. You can create and manage a separate remote for each cloud storage service you want to use. Start by entering the following command:
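
The command is rclone's interactive configuration tool:

~~~
rclone config
~~~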

Hit n then Enter to create a new remote host

Provide any name for this remote host. For example: gmaildrive

Provide the number for the remote type. For example, choose number 2 for Google Drive.

Use y if you are not sure

This will open the Firefox web browser, allowing you to log-in to your Google account. Enter your username and password, then accept to let rclone access your Google Drive. Once this is done, the browser will ask you to go back to rclone to continue.

Select y to finish configuring this remote host. The gmaildrive host will then be created.

After this, you can quit the config using q, kill the job and exit this ssh session:

Using rclone

Whenever transferring files (using rclone or otherwise), log in to the transfer node xfer01-ext.palmetto.clemson.edu.

  • In MobaXterm, create a new ssh session with xfer01-ext.palmetto.clemson.edu as the Remote host
  • In MacOS, open a new terminal and ssh username@xfer01-ext.palmetto.clemson.edu

Once logged-in, load the rclone module:

You can check the content of the remote host gmaildrive:

You can use rclone to (for example) copy a file from Palmetto to any folder in your Google Drive:
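
A hedged sketch of such a copy (the file and folder names are placeholders):

~~~
rclone copy /scratch1/$USER/myfile.txt gmaildrive:myfolder
~~~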

Or, if you want to copy from a specific location on Google Drive back to Palmetto:

Additional rclone commands can be found here.

Tensorflow

This page explains how to install the Tensorflow package for use with GPUs on the cluster, and how to use it from Jupyter Notebook via JupyterHub.

Installing Tensorflow on a GPU node

1) Request an interactive session on a GPU node. For example:

2) Load the Anaconda module:

3) Create a conda environment called tf_env (or any name you like):

4) Activate the conda environment:

5) Install Tensorflow with GPU support from the anaconda channel:
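
A hedged sketch of that install, using the GPU build published on the anaconda channel:

~~~
conda install -c anaconda tensorflow-gpu
~~~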

This will automatically install some packages that are required for Tensorflow, like SciPy or NumPy. To see the list of installed packages, type

If you need additional packages (for example, Pandas), you can type

6) You can now run Python and test the install:

Each time you login, you will first need to load the required modules and also activate the tf_env conda environment before running Python:

Installing Tensorflow on a non-GPU node

1) Request an interactive session on a non-GPU node. For example:

2) Load the required modules

3) Create a conda environment called tf_env_cpu:

4) Activate the conda environment:

5) Install Tensorflow from the anaconda channel:

This will automatically install some packages that are required for Tensorflow, like SciPy or NumPy. To see the list of installed packages, type

If you need additional packages (for example, Pandas), you can type

6) You can now run Python and test the install:

Setup Jupyter kernel

If you would like to use Tensorflow from Jupyter Notebook on Palmetto via JupyterHub, you need the following additional steps:

1) After you have installed Tensorflow, install Jupyter in the same conda environment:

2) Now, set up a Notebook kernel called 'Tensorflow'. For Tensorflow with GPU support, do:
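
A hedged sketch, mirroring the Pytorch kernel setup (assumes ipykernel is installed in tf_env):

~~~
python -m ipykernel install --user --name tf_env --display-name Tensorflow
~~~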


For Tensorflow without GPU support, do:

3) Create/edit the file .jhubrc in your home directory:

Add the following two lines to the .jhubrc file, then exit.

4) Log into JupyterHub. Make sure you have GPU in your selection if you want to use the GPU tensorflow kernel.

5) Once your JupyterHub has started, you should see the Tensorflow kernel in your list of kernels in the Launcher.

6) You are now able to launch a notebook using the tensorflow with GPU kernel:

For Tensorflow with GPU support, you can check whether it is detecting the GPU properly with the line

Example Deep Learning - Multiple Object Detections

This is a demonstration for the tensorflow gpu kernel. Steps for the non-gpu kernel are similar.

1) Request an interactive session on a GPU node. For example:

2) Load the Anaconda module:

3) Activate the conda environment:

4) Install supporting conda modules:

5) Set up TensorFlow's Model directory:

Open Jupyter Server, change into the tensorflow directory, then open and run the multi_object.ipynb notebook.

Singularity

Singularity is a tool for creating and running containers on HPC systems, similar to Docker.

For further information on Singularity, and on downloading, building and running containers with Singularity, please refer to the Singularity documentation. This page provides information about Singularity specific to the Palmetto cluster.

Running Singularity

Singularity is installed on all of the Palmetto compute nodes and the Palmetto LoginVMs, but it IS NOT present on the login.palmetto.clemson.edu node.

To run singularity, you may simply run singularity or more specifically /bin/singularity.

e.g.
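
For instance, a harmless first command is to check the installed version:

~~~
/bin/singularity --version
~~~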

An important change for existing singularity users

Formerly, Palmetto administrators had installed singularity as a 'software module' on Palmetto, but that is no longer the case. If your job scripts have any statements that use the singularity module, then those statements will need to be completely removed; otherwise, your job script may error.

Remove any statements from your job scripts that resemble the following lines:

Where to download containers

Containers can be downloaded from DockerHub

  • DockerHub contains containers for various software packages, and Singularity is compatible with Docker images.

  • The NVIDIA GPU Cloud for GPU-optimized images.

  • Many individual projects contain specific instructions for installation viaDocker and/or Singularity, and may host pre-built images in other locations.

Example: Running OpenFOAM using Singularity

As an example, we consider installing and running the OpenFOAM CFD solver using Singularity. OpenFOAM can be quite difficult to install manually, but Singularity makes it very easy. This example shows how to use Singularity interactively, but Singularity containers can be run in batch jobs as well.

Start by requesting an interactive job.

NOTE: Singularity can only be run on the compute nodes and Palmetto Login VMs:

We recommend that all users store built singularity images in their /home directories. Singularity images can be quite large, so be sure to delete unused or old images:

Next, we download the singularity image for OpenFOAM from DockerHub. This takes a few seconds to complete:

Once the image is downloaded, we are ready to run OpenFOAM. We use singularity shell to start a container, and run a shell in the container.

The -B option is used to 'bind' the /scratch2/$USER directory to a directory named /scratch in the container.

We also use the --pwd option to specify the working directory in the running container (in this case /scratch). This is always recommended.

Typically, the working directory may be the $TMPDIR directory or one of the scratch directories.

Before running OpenFOAM commands, we need to source a few environment variables(this step is specific to OpenFOAM):

Now, we are ready to run a simple example using OpenFOAM:

The simulation takes a few seconds to complete, and should finish with the following output:

We are now ready to exit the container:

Because the directory /scratch was bound to /scratch2/$USER, the simulation output is available in the directory /scratch2/$USER/pitzDaily/postProcessing/:

GPU-enabled software using Singularity containers (NVIDIA GPU Cloud)

Palmetto also supports use of images provided by the NVIDIA GPU Cloud (NGC).

The NGC provides GPU-accelerated HPC and deep learning containers for scientific computing. NVIDIA tests HPC container compatibility with the Singularity runtime through a rigorous QA process.

Pulling NGC images

Singularity images may be pulled directly from the Palmetto GPU compute nodes; an interactive job is most convenient for this. Singularity uses multiple CPU cores when building the image, and so it is recommended that a minimum of 4 CPU cores are reserved. For instance, to reserve 4 CPU cores and 2 NVIDIA Pascal GPUs for 20 minutes, the following could be used:

Wait for the interactive job to give you control over the shell.

Before pulling an NGC image, authentication credentials must be set. This is most easily accomplished by setting the following variables in the build environment.

More information describing how to obtain and use your NVIDIA NGC API key can be found here.

Once credentials are set in the environment, we're ready to pull and convert the NGC image to a local Singularity image file. The general form of this command for NGC HPC images is:
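
Based on the description that follows, a hedged sketch of the general form (app, tag, and local_image are placeholders):

~~~
singularity build local_image docker://nvcr.io/hpc/app:tag
~~~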

This singularity build command will download the app:tag NGC Docker image, convert it to Singularity format, and save it to the local file named local_image.

For example, to pull the namd NGC container tagged with version 2.12-171025 to a local file named namd.simg, we can run:
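
Following the same pattern, a hedged sketch of that pull:

~~~
singularity build namd.simg docker://nvcr.io/hpc/namd:2.12-171025
~~~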

After this command has finished we'll have a Singularity image file, namd.simg:

Running NGC containers

Running NGC containers on Palmetto presents few differences from the run instructions provided on NGC for each application. Application-specific information may vary, so it is recommended that you follow the container-specific documentation before running with Singularity. If the container documentation does not include Singularity information, then the container has not yet been tested under Singularity.

As all NGC containers are optimized for NVIDIA GPU acceleration, we will always want to add the --nv flag to enable NVIDIA GPU support within the container.

The Singularity command below represents the standard form of the Singularity command used on the Palmetto cluster. It will mount the present working directory on the host to /host_pwd in the container process and set the present working directory of the container process to /host_pwd. This means that when our process starts, it will effectively be running in the host directory the singularity command was launched from.
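
A hedged sketch of that standard form (the image file name is a placeholder; --nv enables GPU support as noted above):

~~~
singularity run --nv -B $(pwd):/host_pwd --pwd /host_pwd namd.simg
~~~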


