# Running MATLAB on Peregrine

## Table of Contents

- Running MATLAB interactively
- Running MATLAB in batch mode
- MATLAB versions and licenses
- Parallel Computing Toolbox
- Additional Resources

## Running MATLAB interactively on Peregrine

To run MATLAB interactively through the GUI, you need an X server application on your local system. On a Mac, make sure you have **XQuartz** installed. On Windows, several X server options exist; install one before proceeding. We recommend **Cygwin/X**.

### Windows Users:

1. Launch Cygwin/X.

2. Set the DISPLAY variable:

$ export DISPLAY=:0.0

3. Connect to Peregrine and enter your password when prompted:

$ ssh -Y $USER@peregrine.hpc.nrel.gov

4. Check that you can launch remote X clients in your ssh session. For example,

$ xterm &

should open an xterm window on your local machine. If it doesn't, something is wrong, and you won't be able to run the MATLAB GUI until the problem is resolved. Please note that you need to run Cygwin/X rather than ordinary Cygwin.
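The DISPLAY setting above can be sanity-checked directly in the shell before you connect; a minimal sketch:

```shell
# Set the DISPLAY variable as above, then confirm the value took effect.
export DISPLAY=:0.0
echo "DISPLAY=$DISPLAY"
# prints: DISPLAY=:0.0
```

If the variable prints back empty, remote X clients started in the session will have no display to draw on.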

### Mac Users:

Make sure that you use the -Y option on your ssh command:

$ ssh -Y $USER@peregrine.hpc.nrel.gov

### All Users:

Next, start an interactive job with the -X option:

$ qsub -A <account_string> -I -l nodes=1 -V -X

When your job starts, you will have a shell on a compute node.

### Notes on starting interactive jobs:

- To submit an interactive job you must include the -A <account_string> flag followed by a valid account string. For more information see Accounts and Allocations.
- For more information on interactive jobs see Running Interactive Jobs.

### Loading the MATLAB module and starting MATLAB:

Load the MATLAB module to set up your user environment, which includes setting the location of the license server.

$ module load matlab

To bring up the MATLAB GUI:

$ matlab &

This will use X11 windows on your laptop, and you will be able to use the GUI as if it were running directly on your laptop. The ampersand "&" runs MATLAB as a background job, freeing the terminal for other uses. For a simple MATLAB terminal (no GUI), do:

$ matlab -nodisplay

Below is an example MATLAB script, matlabTest.m, that creates and populates a vector using a simple for-loop and writes the result to a file, x.dat. The shell script matlabTest.sub can be passed to the scheduler to run the job in batch (non-interactive) mode. To try the example out, create both matlabTest.sub and matlabTest.m in an appropriate directory and cd to that directory. Next, call qsub, using the -W option to have the job wait until a MATLAB license is available before it is scheduled:

$ qsub -W x=GRES:MATLAB matlabTest.sub

Calling qstat should show that your job is queued:

Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
<JobID>.admin2            matlabTest       username               0 Q batch

Once the job has finished:

Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
<JobID>.admin2            matlabTest       username        00:00:01 C batch

The standard output is saved in a file called matlabTest.o<JobID>, standard error in matlabTest.e<JobID>, and the file x.dat contains the result of the MATLAB script.

### Notes on matlabTest.sub file:

- Setting a low walltime increases the chances that the job will be scheduled sooner due to backfill.
- The -A <account_string> flag must be followed by a valid account string or the job will encounter a permanent hold (it will appear in the queue but will never run). For more information see Accounts and Allocations.
- The environment variable $PBS_O_WORKDIR is set by the scheduler to the directory from which the qsub command was executed, e.g., /scratch/$USER. In this example, it is also the directory into which MATLAB will write the output file x.dat.
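Outside of a running job, $PBS_O_WORKDIR is not set, but its role can be sketched by assigning it by hand (the temporary directory below is a hypothetical stand-in for a real submission directory such as /scratch/$USER):

```shell
# Stand-in for the scheduler: assign PBS_O_WORKDIR ourselves (hypothetical value);
# inside a real PBS job the scheduler sets it to the qsub submission directory.
PBS_O_WORKDIR=$(mktemp -d)
cd "$PBS_O_WORKDIR"   # the job script changes here before starting MATLAB
pwd                   # x.dat would be written to this directory
```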

**matlabTest.sub**

#!/bin/bash --login
#PBS -l walltime=05:00    # Maximum time requested for job (5 min.)
#PBS -l nodes=1           # Number of nodes
#PBS -N matlabTest        # Name of job
#PBS -A <account_string>  # Program-based WCID (account string associated with job)

# load modules
module purge
module load matlab

# execute code
cd $PBS_O_WORKDIR                # Change directories (output will save here)
matlab -nodisplay -r matlabTest  # Run the MATLAB script

**matlabTest.m**

format long
xmin = 2;
xmax = 10;

x = zeros(xmax-xmin+1,1);
for i = xmin:xmax
    display(i);
    x(i-xmin+1) = i
end

savefile = 'x.dat';
save(savefile,'x','-ASCII')
exit

## MATLAB versions and licenses on Peregrine

### A note on MATLAB versions

The default version available on HPC is R2016a.

Version R2014a is also available via:

$ module load matlab/R2014a

### A note on MATLAB licenses

MATLAB is proprietary software. As such, HPC users have access to a limited number of licenses both for the base MATLAB software as well as a few specialized toolboxes. To see which toolboxes are available, regardless of how they are licensed, start an interactive MATLAB session and run

>> ver

This command will list nearly every available MATLAB toolbox (the list is too long to print here). However, the majority of the toolboxes do not have active licenses. Exceptions are:

MATLAB Compiler Version 6.2 (R2016a)

MATLAB Distributed Computing Server Version 6.8 (R2016a)

Parallel Computing Toolbox Version 6.8 (R2016a)

For a comprehensive list of the MATLAB-related licenses available on HPC, as well as their current status, run the following terminal command:

$ lmstat.matlab

Among other things, you should see the following:

Feature usage info:

Users of MATLAB: (Total of 6 licenses issued; Total of ... licenses in use)

Users of Compiler: (Total of 1 license issued; Total of ... licenses in use)

Users of Distrib_Computing_Toolbox: (Total of 4 licenses issued; Total of ... licenses in use)

Users of MATLAB_Distrib_Comp_Engine: (Total of 16 licenses issued; Total of ... licenses in use)

This documentation only covers the base MATLAB package and the Parallel Computing Toolbox, which check out the "MATLAB" and "Distrib_Computing_Toolbox" licenses, respectively.
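If you save a copy of the lmstat output to a file, the feature lines of interest can be filtered with standard tools. A sketch, using a hypothetical saved sample (the license counts shown are made up for illustration):

```shell
# Hypothetical sample of lmstat.matlab output saved to a file.
cat > lmstat_sample.txt <<'EOF'
Users of MATLAB: (Total of 6 licenses issued; Total of 2 licenses in use)
Users of Compiler: (Total of 1 license issued; Total of 0 licenses in use)
Users of Distrib_Computing_Toolbox: (Total of 4 licenses issued; Total of 1 license in use)
EOF

# Keep only the two features this document uses.
grep -E 'Users of (MATLAB|Distrib_Computing_Toolbox):' lmstat_sample.txt
```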

## Parallel Computing Toolbox on Peregrine

The Parallel Computing Toolbox (PCT) provides the simplest way for HPC users to run parallel MATLAB code on a single, multi-core compute node. Here, we describe how to configure your local MATLAB settings to utilize the PCT and provide some basic examples of running parallel code on Peregrine. For more extensive examples of PCT usage and code examples, see the Mathworks documentation page.

### Configuration of Parallel Computing Toolbox in MATLAB R2016a

Configuration of the PCT is done most easily through the interactive GUI. However, the opening of parallel pools can be significantly slower in interactive mode than in non-interactive (batch) mode. For this reason, in this document the interactive GUI will only be used to set up your local configuration; runtime examples will include batch scripts that submit jobs directly to the scheduler.

To configure your local parallel settings, start an interactive MATLAB session with X11 forwarding (see Running MATLAB interactively on Peregrine above). Open MATLAB R2016a and do the following:

- Under the Home tab, go to Parallel > Parallel Preferences.
- In the Parallel Pool box, set the "Preferred number of workers in a parallel pool" to at least 24 (the max number of cores currently available on a Peregrine compute node).
- Click OK.
- Exit MATLAB.

For various reasons, you might not have 24 workers available at runtime; in this case, MATLAB will just use the largest number available.

### Examples

The goal of this section is to demonstrate how to use the PCT on a single compute node on Peregrine. It describes how to open a local parallel pool and provides some examples of how to use it for parallel computations. Because the opening of parallel pools can be extremely slow in interactive sessions, the examples here will all be restricted to non-interactive (batch) job submission.

**Note:** Each example below will check out one "MATLAB" and one "Distrib_Computing_Toolbox" license at runtime.

**Hello World Example**

In this example, a parallel pool is opened and each worker identifies itself via spmd ("single program multiple data"). Create the MATLAB script helloWorld.m:

% open the local cluster profile

p = parcluster('local');

% open the parallel pool, recording the time it takes

tic;

parpool(p); % open the pool

fprintf('Opening the parallel pool took %g seconds.\n', toc)

% "single program multiple data"

spmd

fprintf('Worker %d says Hello World!\n', labindex)

end

delete(gcp); % close the parallel pool

exit

To run the script on a compute node, create the file helloWorld.sub:

#!/bin/bash

#PBS -l walltime=05:00

#PBS -l nodes=1

#PBS -N helloWorld

#PBS -A <account_string>

# load modules

module purge

module load <path_to_matlab_R2016a_module>

# define an environment variable for the MATLAB script and output

BASE_MFILE_NAME=helloWorld

MATLAB_OUTPUT=${BASE_MFILE_NAME}.out

# execute code

cd $PBS_O_WORKDIR

matlab -nodisplay -r $BASE_MFILE_NAME > $MATLAB_OUTPUT

where, again, the fields in < > must be properly specified. Finally, at the terminal prompt, submit the job to the scheduler:
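The two variables in helloWorld.sub are ordinary shell assignments: the output-file name is derived from the base name of the MATLAB script (no .m extension, since that is how -r expects it). A quick sketch of how the final command expands:

```shell
# Same assignments as in helloWorld.sub; echo the command instead of running
# MATLAB, to show what the shell actually executes.
BASE_MFILE_NAME=helloWorld
MATLAB_OUTPUT=${BASE_MFILE_NAME}.out
echo "matlab -nodisplay -r $BASE_MFILE_NAME > $MATLAB_OUTPUT"
# prints: matlab -nodisplay -r helloWorld > helloWorld.out
```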

$ qsub -W x=GRES:MATLAB helloWorld.sub

The output file helloWorld.out should contain messages about the parallel pool and a "Hello World" message from each of the available workers.

**Example of speed-up using parfor**

MATLAB's parfor ("parallel for-loop") can be used to parallelize tasks that require no communication between workers. In this example, the aim is to solve a stiff, one-parameter system of ordinary differential equations (ODEs) for different (randomly sampled) values of the parameter and to compare the compute times of serial and parfor loops. This is a quintessential Monte Carlo example that is well suited to parfor: the solution for each parameter value is time-consuming to compute but independent of all the other values.

First, create a MATLAB function stiffODEfun.m that defines the right-hand side of the ODE system:

function dy = stiffODEfun(t,y,c)

% This is a modified example from MATLAB's documentation at:

% http://www.mathworks.com/help/matlab/ref/ode15s.html

% The difference here is that the coefficient c is passed as an argument.

dy = zeros(2,1);

dy(1) = y(2);

dy(2) = c*(1 - y(1)^2)*y(2) - y(1);

end

Second, create a driver file stiffODE.m that samples the input parameter and solves the ODE using the ode15s function.

%{

This script samples a parameter of a stiff ODE and solves it both in

serial and parallel (via parfor), comparing both the run times and the

max absolute values of the computed solutions. The code -- especially the

serial part -- will take several minutes to run on Peregrine.

%}

% open the local cluster profile

p = parcluster('local');

% open the parallel pool, recording the time it takes

time_pool = tic;

parpool(p);

time_pool = toc(time_pool);

fprintf('Opening the parallel pool took %g seconds.\n', time_pool)

% create vector of random coefficients on the interval [975,1025]

nsamples = 100; % number of samples

coef = 975 + 50*rand(nsamples,1); % randomly generated coefficients

% compute solutions within serial loop

time_ser = tic;

y_ser = cell(nsamples,1); % cell to save the serial solutions

for i = 1:nsamples

if mod(i,10)==0

fprintf('Serial for loop, i = %d\n', i);

end

[~,y_ser{i}] = ode15s(@(t,y) stiffODEfun(t,y,coef(i)) ,[0 10000],[2 0]);

end

time_ser = toc(time_ser);

% compute solutions within parfor

time_parfor = tic;

y_par = cell(nsamples,1); % cell to save the parallel solutions

err = zeros(nsamples,1); % vector of errors between serial and parallel solutions

parfor i = 1:nsamples

if mod(i,10)==0

fprintf('Parfor loop, i = %d\n', i);

end

[~,y_par{i}] = ode15s(@(t,y) stiffODEfun(t,y,coef(i)) ,[0 10000],[2 0]);

err(i) = norm(y_par{i}-y_ser{i}); % error between serial and parallel solutions

end

time_parfor = toc(time_parfor);

time_par = time_parfor + time_pool;

% print results

fprintf('RESULTS\n\n')

fprintf('Serial time : %g\n', time_ser)

fprintf('Parfor time : %g\n', time_par)

fprintf('Speedup : %g\n\n', time_ser/time_par)

fprintf('Max error between serial and parallel solutions = %e\n', max(abs(err)))

% close the parallel pool

delete(gcp)

exit

Finally, create the batch script stiffODE.sub:

#!/bin/bash

#PBS -l walltime=20:00

#PBS -l nodes=1

#PBS -N stiffODE

#PBS -A <account_string>

# load modules

module purge

module load <path_to_matlab_R2016a_module>

# define environment variables for MATLAB script and output

BASE_MFILE_NAME=stiffODE

MATLAB_OUTPUT=${BASE_MFILE_NAME}.out

# execute code

cd $PBS_O_WORKDIR

matlab -nodisplay -r $BASE_MFILE_NAME > $MATLAB_OUTPUT

Next, submit the job (which will take several minutes to finish on Peregrine):

$ qsub -W x=GRES:MATLAB stiffODE.sub

If the code executed correctly, the end of the text file stiffODE.out should contain the times needed to compute the solutions in serial and parallel as well as the error between the serial and parallel solutions (which should be 0!). There should be a significant speed-up -- how much depends on the runtime environment -- for the parallelized computation.

## Additional Resources

Presentation slides and code examples are available for NREL users here: