
Running on a Remote System

The xSYMMIC command line utility can be invoked on a remote system through the Remote run... menu item in the Solve menu of the GUI. This menu item opens the Remote Run dialog shown below. A template does not need to be open in SYMMIC in order to use Remote Run, but the Template box will be automatically filled with the filename of the currently open template. A template file can be selected by using the browse button (...) next to the Template edit box. Once a remote run has been launched, the template file should not be changed until the job is complete and the solution files have been downloaded. Also, any unsaved changes to the template(s) should be saved prior to using the Remote Run dialog.



The Options button opens a dialog for setting the command line options for xSYMMIC. The same options dialog is used in the Background Run dialog. Please refer to the sections entitled Command Line Utility and Background Run for a complete description of the Options settings.

The template file will be solved by launching xSYMMIC on a Linux system located anywhere on the network. The remote system must have xSYMMIC installed and licensed. If the remote system is a cluster instead of a single workstation, then the machine used to start the parallel processing (i.e. the login or master node of the cluster) must be able to check out a valid solver license. The rest of the nodes do not need a license to participate in the parallel computation started by the master. However, all machines must have the latest release of xSYMMIC, with the installation “bin” directory included in the PATH environment variable. Consult the Chapter 1 section on Linux Installation of xSYMMIC for more details.
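For example, if xSYMMIC were installed under /opt/SYMMIC on a node (an illustrative path; see Chapter 1 for the actual installation location), a line such as the following could be added to the user's ~/.bashrc on that node:

export PATH=$PATH:/opt/SYMMIC/bin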

In the Remote Run dialog, enter the IPv4 address of the remote system and the port for communication via secure shell (SSH). Port 22 is used for SSH by default. The username of an account on the remote system must also be specified. This user must have access to xSYMMIC, as well as to any libraries required by xSYMMIC, such as the MPI library when using a cluster.

The template file(s) will be uploaded to a working directory on the remote system. To use the default directory on a single workstation, which is typically the user's home directory, the Working directory edit box can be left blank. For cluster computing, the working directory should specify the path to a network file system accessible by every compute node. The user's account will be used on every node in the cluster to run xSYMMIC, so the user account on every machine must have the same path to the network file system. If desired, a relative path from the user's home directory can be used for the working directory. For example, instead of "/home/Matt/nfs/" the relative paths "./nfs/" or "nfs/" may be used, but some shortcuts like "~/nfs/" may not work. In all cases, the working directory path should end in a slash; if it does not, the dialog will append a '/' before launching.

When the Connect button is pressed, the Remote Run dialog will attempt a secure connection to the remote system. This can take a while, especially if the IP address is not valid or not accessible. You may need to wait a couple of minutes for the bad connection to time out before you regain control of the dialog.




When the connection is first established, the remote system will respond with the SHA1 fingerprint of its host key to allow confirmation that the intended host has been reached instead of an imposter. If the fingerprint of the remote system is known in advance, enter it in the Fingerprint field prior to pressing the Connect button so that it can be validated. If no fingerprint is specified or it does not match the one received from the remote system, the dialog will notify the user of the host's fingerprint and ask whether the connection should continue. If you are willing to take the security risk, click Yes and the connection will continue trying to authenticate. Copy the fingerprint from the bottom of the dialog to the Fingerprint box so that future connections will be automatically validated.

To establish a secure connection, the user must provide authentication acceptable to the remote system. A commonly accepted method is public-private key pair authentication. If you have the private key file corresponding to a public key authorized on the remote machine, place the path to this file in the Private key file box. Both .pem and .ppk file formats are acceptable. If no private key is provided, or key authentication fails, the dialog will fall back to authentication by username and password. If the remote system allows password authentication, the user will then be asked to enter the password. The password is sent securely via SSH for authentication.
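As a minimal sketch of one common way to set up such a key pair, assuming OpenSSH is available on the local machine (the key name, username, and address below are only illustrative):

ssh-keygen -t rsa -b 2048 -m PEM -f symmic_key      # creates symmic_key (private) and symmic_key.pub (public)
ssh-copy-id -i symmic_key.pub Matt@192.0.2.10       # appends the public key to ~/.ssh/authorized_keys on the remote account

The path to the private key file (symmic_key in this sketch) would then be entered in the Private key file box.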




When the secure connection has been authenticated, the Connect button will be grayed and the Launch button will be enabled. Pressing the Launch button will cause all of the problem files to be uploaded to the working directory on the remote system using the secure copy protocol (SCP). Then xSYMMIC will be invoked on the remote system to solve the problem. When the computation has finished, the solution files will be downloaded using the secure file transfer protocol (SFTP). All of these transfers use encrypted protocols to protect the data in transit.

Note: Any unsaved changes for the Remote Run template should be saved to disk before Launch.

After launch, the progress of the template file transfers will be displayed in the text area at the bottom of the dialog. When the execute command is issued, the console output of xSYMMIC on the remote system will be transferred to the local machine via SSH and also displayed in the dialog. At any time the dialog may be closed without affecting the remote run activity. When the dialog is reopened, the text received from the remote system while it was closed will be displayed, and the dialog will continue to be updated as the run progresses. During the run the Launch button changes to an Abort button, which can be used to terminate the process. After the execute command finishes, all of the solution files will be downloaded from the remote system to the folder where the original template file is located. A message box will appear stating that the remote run is complete. If an error or user abort occurred at any point in the sequence, a message that the remote run has failed will appear instead. If the run has completed but the Remote Run dialog appears to be frozen, look for the run completion message box, which may be hiding under another window on the desktop.



When the launch terminates, the Launch button is grayed and the Connect button again becomes enabled. To begin a new remote run, the user must use the Connect button to re-establish the secure connection prior to launching a new run. If a connection was established but a launch is no longer desired, the user should push the Disconnect button to release the connection for future use.

For a remote system consisting of a single computer, keep the default settings in the Message Passing Interface (MPI) parameters section of the dialog. When the Number of processes is 1, the target is assumed to be a single computer and so a normal xSYMMIC command will be used to solve the problem. As shown in the above dialog on the left:

cd /home/Matt; xSYMMIC FETbig.xml; echo $?

The normal command line is preceded by a change of directory from the user's home directory to the working directory for the problem, and followed by an echo of the return status. When xSYMMIC is invoked in this manner without any MPI parameters, the solver will use all of the cores available on the remote machine to solve the problem as fast as possible. MPI parallelism will not be used for superposition. Each part of the superposition calculation will be computed in sequence.

When the calculation is complete, the solution files will be downloaded and the connection closed, as shown in the above dialog on the right. The user may then use Load solution... from the Solve menu to view the results. Note that the template file must be open in SYMMIC for the Load solution... menu item to be available. The recorded values file will have been downloaded to the same directory as the original template file. These values can be viewed through the Results > Record values... menu item.

Using the MPI Parameters

To solve problems faster using parallel computing, set the parameter Number of processes to a value greater than 1. The problem will then be divided up over the MPI processes (or ranks). This will have the greatest benefit when the problem consists of a large mesh and there are a large number of compute nodes available in a cluster. As noted above, the MPI library must be installed on the machines of the cluster to take advantage of parallel computing.



As shown in the above dialog, the command used for parallel computing is:

mpiexec -n 4 -ppn 2 -hostfile hosts -print-rank-map xSYMMIC FETbig.xml

This command requests that MPI start four processes running the xSYMMIC code on the compute nodes listed in the hosts file. Two processes are distributed to each host in the list until all four processes have been assigned. The -print-rank-map option causes mpiexec to report the distribution of ranks to hosts, as shown in the dialog on the left. The four processes in this case are denoted by ranks 0, 1, 2, and 3. The rank 0 and 1 processes run on host HPCL8, while the rank 2 and 3 processes run on HPCL7.
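In other words, with -n 4 and -ppn 2 the rank-to-host assignment in this example is:

rank 0 -> HPCL8
rank 1 -> HPCL8
rank 2 -> HPCL7
rank 3 -> HPCL7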

In the above dialog on the right, MPI debugging was enabled by setting the environment variable I_MPI_DEBUG=4 on the remote system login shell. MPI debug mode reveals the actual pinning of computational resources to each process. The rank 0 process is running on HPCL8 using 16 hyper-threads numbered {0,1,2,3,4,5,6,7,16,17,18,19,20,21,22,23}. Rank 1 is using the remaining hyper-threads on HPCL8, while hyper-threads on compute node HPCL7 are similarly divided between ranks 2 and 3.
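For example, the debug level could be set in the bash login shell on the remote system before launching the run:

export I_MPI_DEBUG=4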

The File to use: edit box in the MPI parameters allows the user to specify a hostfile for the mpiexec command line. If this box is blank, no -hostfile or -machinefile option will be written, in which case all parallel processes will run on the cores of the single machine given by the Host (IP address). To use multiple machines, the names of these machines must be listed in a hostfile or machinefile. By selecting the radio button next to Use Hostfile, the user may choose to provide a text file in the hostfile format, which simply lists the names (or IP addresses) of the compute nodes that will be used to solve the problem. The contents of the hosts file for the above example were as follows:

HPCL8
HPCL7
HPCL5
HPCL3

These are the names of the computers in the cluster as given in the /etc/hosts file on those machines. The MPI process manager determines how to allocate the computing resources (i.e. hyper-threads) of these machines to the processes, guided by the processes per node (-ppn) flag.
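For instance, each machine's /etc/hosts file might contain entries along these lines (the addresses are only illustrative):

192.0.2.11   HPCL8
192.0.2.12   HPCL7
192.0.2.13   HPCL5
192.0.2.14   HPCL3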

To get more control over process pinning, use the alternative machinefile format by selecting the radio button next to Machinefile. The machinefile option does not require the processes per node flag because the machinefile can specify how processes are mapped to computing resources. For complete details see the Intel MPI library documentation. In the example dialogs shown below, the machinefile contained the following lines:

HPCL8:1 binding=domain=8
HPCL7:1 binding=domain=8
HPCL5:1 binding=domain=8
HPCL3:1 binding=domain=8

This file specifies that each machine receives one MPI process (the :1 suffix) and that each process will be given 8 hyper-threads of computing resources from that machine. As shown in the left-side dialog below, the machinefile causes the processes per node option not to be issued. The four processes are distributed one to each host. As shown in the debug output at the bottom of the dialog on the right, hyper-threads {0,1,2,3,16,17,18,19} on HPCL8 are assigned to rank 0, while the other ranks also receive 8 hyper-threads each from their compute nodes. Note that xSYMMIC will report 4 MPI processes and 4 OpenMP threads for each process, because xSYMMIC further restricts its parallelism to 1 thread per physical core. For domain=32 on a 16-core system, xSYMMIC would use 16 OpenMP threads.
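With these settings, the dialog issues a command of roughly the following form, with no -ppn option (the machinefile name here is illustrative):

mpiexec -n 4 -machinefile machines -print-rank-map xSYMMIC FETbig.xml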



Process pinning can be used instead of MKL_NUM_THREADS to control the number of threads used by the solver, so we recommend that the MKL_NUM_THREADS and OMP_NUM_THREADS environment variables remain undefined in the user's shell on the remote system. The numerical libraries will use the optimum number of threads per process for multi-threading when these variables are undefined. Setting these environment variables to a smaller or larger number of threads than are available to the process may result in slower performance.
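To verify that these variables are not defined in the remote shell, one might run, for example:

env | grep -E 'MKL_NUM_THREADS|OMP_NUM_THREADS'   # should print nothing
unset MKL_NUM_THREADS OMP_NUM_THREADS             # clears them for the current session if they were set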

The Download from Cluster button can be used to download a hostfile (or any file) from the remote system, using the secure protocol SCP. Simply enter the directory path and filename desired and press OK. The file will be downloaded to the current local directory and entered into the File to use text box, and the Use Hostfile radio button will be selected. The user should edit these settings as necessary.

Running in the Cloud

The examples shown above were carried out on a local cluster, but Remote Run works equally well for connecting to compute instances in the cloud. Simply replace the local cluster IP address with the public IP address of a cloud instance. Most cloud instances only allow authentication via private key, so the private key file of the cloud instance must be saved locally and then used for authentication.

The cloud instances must also have a licensed copy of xSYMMIC. Cloud licenses for xSYMMIC are provided in the Amazon Elastic Compute Cloud (EC2) by the xSYMMIC in the Cloud product. SYMMIC users may access this product by creating an Amazon Web Services (AWS) account, and then launching an instance using the xSYMMIC in the Cloud machine image available in the AWS Marketplace. For details see the SYMMIC in the Cloud section later in this chapter. Make sure that the security group rules for the EC2 instance allow inbound SSH (port 22) from your desktop IP address so SYMMIC can access the instance, and allow all outbound TCP ports (the default) so the instance can communicate with the license server.
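As an illustration, the inbound SSH rule could also be added with the AWS command line interface (the security group ID and desktop address below are placeholders):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.5/32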

Problems may also be solved on a cluster of AWS instances in EC2 using the xSYMMIC in the Cloud product. In this case, AWS Cluster should be selected as the Remote machine type. Doing so will fill the Run as username and Working directory fields with the appropriate values for this type of cluster. When connecting to an AWS Cluster, a special setup step is performed automatically: the private key file is uploaded to the cluster and installed in ~/.ssh/id_rsa. This is necessary to allow the master instance to communicate with the other compute nodes. Note: No upload of the private key file is performed when the Remote machine type is set to Linux.




After the private key upload to the AWS Cluster, the hostfile will be automatically downloaded to the local directory where the template file is located and the status of the download reported. This "ipaddresses" file contains the list of network addresses of all of the machines in the cluster. It can serve as the hostfile for the mpiexec command, so the "ipaddresses" filename is automatically inserted in the File to use: box for mapping processes to nodes. If desired, the user may edit this file locally or choose another file, prior to launching the run.
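For a four-node cluster, the downloaded ipaddresses file might look something like this (the private addresses shown are only illustrative):

172.31.0.11
172.31.0.12
172.31.0.13
172.31.0.14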

Once the run is launched, everything operates the same as described above for a local cluster. In summary, selecting AWS Cluster as the Remote machine type simply suggests a username and working directory, performs the setup step and downloads the hostfile automatically. Otherwise, it operates the same as the Linux machine type.




Please see the section on SYMMIC in the Cloud for more information on the AWS Cluster product.

About MPI Implementations

The Remote Run dialog was designed and tested with the Intel MPI libraries. It may also be possible to use Remote Run with other MPI implementations, such as OpenMPI and MPICH, but this has not been tested. Most other MPI implementations support the mpiexec command, but the command line flags may be different. Intel, OpenMPI, and MPICH support the number of processes (-n) flag, as this is part of the MPI specification. OpenMPI does not currently support the -ppn flag, as shown in the table below. However, the hostfile option in OpenMPI may be used to control the distribution of processes to nodes. For OpenMPI and MPICH the hostfile is the same as the machinefile, so these options can be used interchangeably. On Intel MPI the hostfile is just a list of the names of the nodes to run on, while the machinefile allows the mapping of processes to nodes. There are differences in the formats for these files, so please consult the documentation for the MPI libraries being targeted for full file specifications.

Library                  Recent Version   MPI   Startup   Number of Processes   Processes Per Node   -hostfile format   -machinefile format
Intel MPI for Linux      2018             3.1   mpiexec   -n                    -ppn                 name               name:ppn
Intel MPI for Windows    2018             3.1   mpiexec   -n                    -ppn                 name               name:ppn
MPICH                    3.2              3     mpiexec   -n                    -ppn                 name:ppn           name:ppn
Open MPI                 3.0.0            3.1   mpiexec   -n                    (not supported)      name slots=ppn     name slots=ppn
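As an illustration of the differences summarized in the table, an Open MPI launch roughly equivalent to the Intel MPI example above might use a hosts file containing the slots keyword (untested, as noted above; consult the Open MPI documentation):

HPCL8 slots=2
HPCL7 slots=2

mpiexec -n 4 -hostfile hosts xSYMMIC FETbig.xml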
