
Computational chemistry and computational physics software



Aces II Components of the ACES (Advanced Concepts for Electronic Structure) program package created by Rod Bartlett and his coworkers. It performs many-body perturbation theory and coupled-cluster calculations as well as geometry optimizations.
COLUMBUS The COLUMBUS Program System is designed for performing ab initio multireference single- and double-excitation configuration interaction (MR-CISD, ACPF, AQCC, CEPA) calculations. Maintained by Ron Shepard.
Crystal98 1.0 The Crystal98 1.0 ab-initio program for SCF and DFT on 3D periodic structures from Saunders, Dovesi, Roetti, Causa, Harrison, Orlando and Zicovich-Wilson.

See the user manual /usr/local/doc/crystal98/manual98.pdf for further details.
The executables for Crystal98 have the standard names with c98 prepended. They are
  • c98scf
  • c98scfdir
  • c98integrals
  • c98properties
  • c98convert
  • c98loptcg
  • c98bulk
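A minimal sketch of a run is shown below. It assumes the usual Crystal convention that the program reads the input deck from standard input and writes its scratch files in the working directory; the input name mgo.d12 is only a placeholder. Consult the manual cited above for the authoritative procedure.

#!/bin/ksh
# Hypothetical sketch: direct-SCF Crystal98 run on an input deck mgo.d12
HOST=`uname -n`
SCR=/scr_1/tmp/$HOST.c98.$$
mkdir $SCR
cp $HOME/crystal/mgo.d12 $SCR/
cd $SCR
# assumption: the c98 wrapper reads the input deck from standard input
c98scfdir < mgo.d12 > mgo.out 2>&1
cp mgo.out $HOME/crystal/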
Dalton 2.0 Integral program with first and second derivative integrals, a second-order direct MCSCF program, and property programs, developed by Hans-Joergen Jensen, Trygve Helgaker and Hans Agren.
ENDyne 5 The molecular dynamics program ENDyne and its graphical interface written by Erik Deumens implementing the END theory of Erik Deumens and Yngve Ohrn.

Example LoadLeveler job file for a serial run

#!/bin/ksh
# @ output = pHe.out
# @ error = pHe.err
# @ class = lhugex
# @ notification = never
# @ notify_user = deumens@qtp.ufl.edu
# @ checkpoint = no
# @ restart = no
# @ requirements = (Arch == "p2sc") && (OpSys == "AIX51")
# @ queue

HOST=`uname -n`
echo $HOST
MOL=pHe
SCR=/scr_1/tmp/$HOST.${MOL}_ed.$$
mkdir $SCR
cp $HOME/END/$MOL.inp $SCR/
cd $SCR
echo "running version 2..."
time endyne -e 2.7.6 $MOL.inp > ${MOL}v2.log 2>&1
echo "running version 5..."
time endyne -e 5 $MOL.inp > ${MOL}v5.log 2>&1
echo "========= files ========="
ls -l
echo "========================="
cp ${MOL}*.log $HOME/END/
echo done
		
GAMESS 99 Ab-initio program (General Atomic and Molecular Electronic Structure System) developed by M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. Su, T. L. Windus, M. Dupuis and J. A. Montgomery.
gOpenMol 2.1 gOpenMol is the graphical interface to OpenMol and a molecular display program. Consult the excellent tutorial to learn about the program. gOpenMol is started on the Simu cluster with the command rungOpenMol.
Machines   Processor      Comments
buddy      SPARC (MESA)   On SPARC systems, a version with software rendering is installed.
Gaussian03 The Gaussian 03 program system for ab-initio electronic structure calculations created by John Pople and his collaborators. All environment variables are set by default on the QTP system. Just type g03 <command-file> to run it. We have executables installed for
  1. Solaris 9/SPARC on BUDDY,
  2. Linux/IA32 (g03-C.02)
  3. Linux/x86_64 (g03-D.01)
  4. Linux/AMD (g03-E.01)
  5. Linux/EM64T (g03-E.01) on the Linux clusters.

To run GaussView on linux64 you should set the environment variable and source the profile:
export g03root=/share/local/lib/g03-EM64T
. $g03root/g03/bsd/g03.profile
Then the alias gv will be defined to start GaussView. GaussView is an X11 application; to display it on a Windows computer you will need an X server such as X-Win32. MacOS and Linux desktop and laptop computers have X11 built in. The X11 server must also support OpenGL for GaussView to work; it does not work on the old QTP Sun stations.

You can start GaussView with
gv -mesagl
which will use a software OpenGL driver so that no special hardware is needed. Possible hardware conflicts with the graphics on the X-display computer can be avoided this way as well.

Before running the command g98 or g03 you should set the environment variable GAUSS_SCRDIR=/scr_1/tmp/your_directory; export GAUSS_SCRDIR. By default GAUSS_SCRDIR=/scr_1/tmp, and the 6 am cleaning daemon will then remove all integral files. Setting the variable to your own directory, as explained in the diskspace instructions, avoids this.
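For example, using the commands above (your_directory and job.in are placeholders for your own scratch directory and input file):

# keep Gaussian scratch files in your own directory under /scr_1/tmp
# (create this directory first as described in the diskspace instructions)
GAUSS_SCRDIR=/scr_1/tmp/your_directory
export GAUSS_SCRDIR
# run Gaussian 03 on the input file job.in
g03 job.in > job.out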

You must contact Erik Deumens to sign a non-disclosure agreement before you will be allowed to access the program.

Current status of platforms and versions.
Machines                         Processor      Version                Command
buddy                            SPARC          g03 B.04               g03
arwen00-1d                       IA32           g03 C.02               g03-32
haku00-1f, surg1-7, wukong0-f    x86_64 EM64T   g03 E.01 or g03 D.01   g03-EM64T or g03-64
ra00-3i                          x86_64 AMD     g03 E.01 or g03 D.01   g03-AMD or g03-64
Note that g03-64 and g03-EM64T both work on Intel EM64T nodes, and that g03-64 and g03-AMD both work on AMD Opteron nodes; g03-32 works on all Linux nodes.

Example PBS job file for a 4-way parallel run of g03-64 on SURG

#!/bin/bash
#PBS -N g03par
#PBS -o g03par.out
#PBS -e g03par.err
#PBS -m abe
#PBS -M XXXXX@qtp.ufl.edu
#PBS -q brute
#PBS -l nodes=1:surg:ppn=4

# Use local scratch
mkdir /scr_1/tmp/`hostname`.g03.$$
GAUSS_SCRDIR=/scr_1/tmp/`hostname`.g03.$$
export GAUSS_SCRDIR
cd /scr_1/tmp/`hostname`.g03.$$
# Make sure the job.in file has the Gaussian 
# directive to use 4 CPUs
cp $HOME/job.in .
g03-64 job.in > job.out
cp job.out $HOME/job.out
cd ..
rm -rf `hostname`.g03.$$
echo done
		
HyperChem 8 The computational chemistry software HyperChem version 8 for Windows, created by HyperCube, is available to all members of QTP and the University of Florida for installation on laptops and desktops. The procedure is as follows:
  1. Check out the CDs from Erik Deumens in NPB 2334. Please bring the CDs back within one day.
  2. You will get two CDs. The first CD contains the standard HyperChem version 8 installation and should be installed first. Use the options standalone and software license. The CD will also install POV-Ray, a free software package that is needed to display molecular densities. The default installation directory is C:\Hyper80.
  3. Once the installation completes successfully, you need to remove the executable file chem.exe (the icon is a green beaker) from the folder C:\Hyper80\Program and replace it with chem.exe from the second CD. This new executable is licensed to run on a computer with an IP address in the 10.244.0.0 range used inside the New Physics Building.
  4. If you install this on your laptop, HyperChem will only work when the laptop is connected to the network in the Physics Building by cable, not by wireless and not in any other location. If you need a license that works elsewhere, you must contact HyperCube and ask for an individual machine license. Then the installation of the first CD is sufficient.
MOLCAS A complete active space graphical unitary group program written by Bjorn Roos and Roland Lindh and others.
MOPAC 93 General molecular orbital program written by Michael Dewar and James Stewart and collaborators. MOPAC 93 has been installed. MOPAC2000 is maintained by Fujitsu.
  • mopac.exe -- 50 heavy atoms, 100 hydrogens (SUN and AIX)
  • mopac-big.exe -- 200 heavy atoms, 350 hydrogens (SUN only)
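A minimal sketch of a run, assuming the usual MOPAC convention that the input deck is a .dat file named on the command line (the directory names and ethane.dat are only placeholders):

# hypothetical example: run MOPAC 93 on the input deck ethane.dat
SCR=/scr_1/tmp/`uname -n`.mopac.$$
mkdir $SCR
cd $SCR
cp $HOME/mopac/ethane.dat .
mopac.exe ethane.dat > ethane.log 2>&1
# with the usual MOPAC convention the results appear in ethane.out and ethane.arc
cp ethane.* $HOME/mopac/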
NWChem 4.5 Pacific Northwest National Laboratory's parallel software package for SCF, DFT, MBPT2, CCSD and properties.
NWChem is software for computational chemistry specifically designed for large-scale parallel computation. See the Pacific Northwest National Laboratory web pages for capabilities and for the user's manual. The user and programmer manuals can be found in PDF format in /usr/local/doc/nwchem. To run NWChem, you must make the symbolic link
ln -s /usr/local/etc/default.nwchemrc $HOME/.nwchemrc
Example input decks can be found in /usr/local/src/nwchem/4.5/QA.
The following binaries are available:
Machines   Processor   Comments
buddy      SPARC       serial; the executable is not parallel capable.

To run it, type
nwchem system.nw > system.out
Here system.nw is your NWChem input file.
To run in some form of parallel execution, create a work directory and put a file called nwchem.p with one line per machine
echo $USER `hostname` $NPROC /usr/local/bin/nwchem $PWD > nwchem.p
where NPROC is the number of CPUs to use on the machine. Then run with
nwparallel nwchem system.nw > system.out
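Putting these steps together, a minimal sketch of a parallel run (the directory names, system.nw and the CPU count of 4 are placeholders):

# create a private work directory and copy the input there
WORK=/scr_1/tmp/`hostname`.nwchem.$$
mkdir $WORK
cd $WORK
cp $HOME/nwchem/system.nw .
# nwchem.p: one line per machine listing user, host, CPU count,
# executable and work directory
NPROC=4
echo $USER `hostname` $NPROC /usr/local/bin/nwchem $PWD > nwchem.p
# run through the parallel wrapper
nwparallel nwchem system.nw > system.out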

An example LoadLeveler job file for a run on Xena II (to be saved as nwchem.job):

#!/bin/ksh
# @ job_type=parallel
# @ class = lhugex
# @ output = nwchem.out
# @ error =  nwchem.err
# @ requirements = (Arch == "p2sc")
# @ notify_user = your_name@qtp.ufl.edu
# @ min_processors = 12
# @ max_processors = 15
# @ network.lapi = css0,not_shared,US
# @ environment = COPY_ALL; MP_PULSE=0; MP_SINGLE_THREAD=yes; MP_WAIT_MODE=yield; restart=no
# @ queue
export CNAME=nwchem
machine=`uname -n`
export CALC="/scr_2/tmp/$machine.nwchem.$$"
mkdir $CALC
export PROC=p2sc
export LOCAL="`pwd`"
date
uname -a
cd $CALC
if [ `pwd` != $CALC ]
then 
   echo "In `pwd` instead of $CALC: STOP!"
   exit
fi
echo '******* NWChem will start now *******'
nwchem $LOCAL/${CNAME}.nwi > $LOCAL/${CNAME}.nwo
echo '******* NWChem exited *******'
echo "starting clean-up"
cd ..
/bin/rm -r $CALC
echo done
exit
with the NWChem input deck (to be saved as nwchem.nwi):
start
title "q0l-02f, Me2N-NO2, Cs, 
         CCSD(T)-fc/cc-pVTZ//B3LYP/6-31G**"
geometry autosym
N -.567554 -.224363 .000000
N .804248 -.034262 .000000
O 1.360973 .024660 1.097057
O 1.360973 .024660 -1.097057
C -1.239866 .053279 -1.263921
C -1.239866 .053279 1.263921
H -.755338 -.502712 -2.064122
H -2.273466 -.281438 -1.163781
H -1.227028 1.121644 -1.516981
H -.755338 -.502712 2.064122
H -1.227028 1.121644 1.516981
H -2.273466 -.281438 1.163781
end
basis spherical nosegment
C library cc-pvtz
H library cc-pvtz
N library cc-pvtz
O library cc-pvtz
end
memory 512 mb
echo
ccsd; maxiter 40; freeze atomic; end
task ccsd(t)

task shell all "/bin/rm -f *"
OpenDX 4.2 OpenDX, the open-source successor of IBM Data Explorer, is a powerful visualization tool. It is started with the command dx.
Machines                                    Processor          Comments
desktops, buddy                             SPARC (MESA)       On SPARC systems, a version with software rendering is installed.
simu2, simu3, simu4, simu5, simu6, simu7    power3 (GTX1000,   On the Simu cluster the hardware OpenGL version is installed.
                                            OpenGL)            It works on the nodes that have a high-end graphics adapter.
simu1, simu8, simu9                         power3 (MESA)      It does not work on these nodes at this time.
Siesta 1.3 Siesta 1.3 is installed on the simu cluster and the Atanasoff SP system. It can run in parallel. The documentation is in /usr/local/doc/siesta/user_guide.ps.

Example LoadLeveler job file for a parallel run

#!/bin/bash
#@ shell = /bin/bash
#@ job_name = siesta_h2o
#@ output   = siesta.out
#@ error    = siesta.err
#@ job_type = parallel
#@ class    = quick
#@ node_usage = shared
#@ tasks_per_node = 1 
#@ node        = 2
#@ wall_clock_limit = 1:00:00
#@ notification     = never
#@ network.MPI      = css0,shared,us
###@ environment = MPI_SHARED_MEMORY=yes; MP_SAVEHOSTFILE=hf.$(host).$(jobid); COPY_ALL;
#@ queue

# Use global scratch disk
SCR=/scr_2/tmp/`hostname`.my_h2o.$$
mkdir $SCR
cd $SCR
# Copy input to work directory
cp $HOME/h2o.fdf .
# Run
/usr/bin/poe /usr/local/bin/siesta < h2o.fdf > h2o.out
# Copy output to a safe place
cp h2o.out $HOME
# Remove scratch files
cd ..
rm -r $SCR
with the input file for water h2o.fdf
SystemName          Water molecule
SystemLabel         h2o
NumberOfAtoms       3
NumberOfSpecies     2

%block ChemicalSpeciesLabel
 1  8  O  # Species index, atomic number, species label
 2  1  H
%endblock ChemicalSpeciesLabel

AtomicCoordinatesFormat  Ang
%block AtomicCoordinatesAndAtomicSpecies
 0.000  0.000  0.000  1
 0.757  0.586  0.000  2
-0.757  0.586  0.000  2
%endblock AtomicCoordinatesAndAtomicSpecies

MD.TypeOfRun             Nose # Type of dynamics:
MD.InitialTemperature    300 K
MD.TargetTemperature     300 K
MD.InitialTimeStep         1
MD.FinalTimeStep         100
MD.LengthTimeStep        1.0 fs
You can download siesta.job and h2o.fdf as examples.
Turbomole 5.3 Turbomole 5.3 is installed on the IBM SP systems SIMU and Atanasoff. It can run in parallel on the SP. Members of Bartlett's and Richards' groups are included in the license. If you are interested in using the software, contact Prof. Richards.
To add the Turbomole commands to your path, type the command
turbopath
To run Turbomole in parallel, type in addition the command
turbopathmpi
The documentation is on the SPARC systems in /usr/local/doc/turbomole.ps; the commands are available on the RS/6000 systems for the p2sc, power2 and power3 architectures. On Xena II the architecture is p2sc.

The documentation explains how to create the control file, the initial coordinates and the other files. The command define is used to create a control file; it may refer to other files with coordinates and basis specifications.

Example LoadLeveler job file for a serial run

#!/bin/ksh 
# @ output = sertur.out 
# @ error = sertur.err 
# @ class = lhugex 
# @ notification = never 
# @ notify_user = greene@qtp.ufl.edu 
# @ checkpoint = no 
# @ restart = no 
# @ requirements = (Arch == "p2sc" && OpSys == "AIX51") 
# 
# mandatory directive to process the job. 
# @ queue 
# 
# User commands follow.... 
echo "pure serial job:" 
echo master: `uname -n` 
# ==== begin executable statement 
. /usr/local/lib/turbomole/scripts/turbopath
scr=/scr_1/tmp/`uname -n`.sg.$$
mkdir $scr
cd $scr
cp /ufl/chem/nxr/sg/RUB/OXID/SER/* .
# Geometry optimization
jobex
# single DFT SCF
#dscf > dscf.out
cp * /camp/crunch_1/chem/nxr/sg/TURBO/RUBox/SER
# ==== end executable statement 
echo "Job done." 

Example LoadLeveler job file for a parallel run

You must create a control file with define just as in the case of the serial run. Then you must prepare for a parallel run. This must be done on one of the interactive nodes of Xena II. Create a directory for the files for this particular run. Put a file host.list in it with the following content:

xena00
xena10
xena20
xena30
xena40
xena50
xena60
xena70
xena80
xena90
xenaa0
xenab0
Then run the command turbo_start -batch_prepare. Since this is a parallel command, you must issue turbopathmpi first. Answer all questions. Note that before doing a DSCF, GRAD, or geometry optimization calculation, you must run the statistics calculations for DSCF and GRAD. You can run these interactively with turbo_start -batch or submit them to LoadLeveler. The statistics runs are short; they generate files that are needed by the other calculations.

The actual SCF, gradient, or geometry calculations should be submitted to LoadLeveler with a script like the following. Parallel calculations are always prepared with turbo_start -batch_prepare and executed in the LoadLeveler script with turbo_start -batch. Note the following when answering the questions asked by turbo_start -batch_prepare:

  • Because Turbomole uses a master-client architecture for parallel execution, it is necessary to assign one more process. Thus to run two-way parallel, you must ask LoadLeveler to assign three processes. When you create the control file as explained in the manual, you must specify that you want N clients, and you must specify N+1 processes in the LoadLeveler job file.
  • The directory for scratch files should be a global directory accessible by all nodes. For Xena II this is /scr_2/tmp/.... It is recommended that you create a keep-directory, such as /scr_2/tmp/keep.sg.13-04-03, so that it will not be deleted by the scratch cleaning process and so that you can specify it in your LoadLeveler script (see example below).
  • When asked whether the machine is in dedicated use, answer yes for Xena II and no for Simu.
#!/bin/ksh 
# @ output = partur.out 
# @ error = partur.err 
# @ class = lhugex 
# @ notification = complete 
# @ notify_user = greene@qtp.ufl.edu 
# @ checkpoint = no 
# @ restart = no 
# @ requirements = (Arch == "p2sc" && OpSys == "AIX51") 
# @ wall_clock_limit = 10:00,9:00 
# 
# ---- begin specification of the kind
# ---- of parallel job 
# @ job_type = parallel 
# @ node_usage = shared 
# @ node = 3,3 
# @ tasks_per_node = 1 
# @ network.MPI = css0,not_shared,US 
# ---- end specification of the kind 
# ---- of parallel job 
# 
# mandatory directive to process the job. 
# @ queue 
# 
# User commands follow.... 
echo "pure MPI job on 2 different nodes:" 
echo master: `uname -n` 
# ==== begin executable statement 
. /usr/local/lib/turbomole/scripts/turbopath
. /usr/local/lib/turbomole/scripts/turbopathmpi
# Need global directory for parallel execution
scr=/scr_2/tmp/keep.sg.13-04-03
mkdir $scr
cd /ufl/chem/nxr/sg/RUB/OXID/PAR/
cp control control_seq basis coord alpha beta $scr/
cp xyzcase xyzproc $scr
cp DSCF-par-stat $scr
cp GRAD-par-stat $scr
cd $scr
turbo_start -batch
cp * /camp/crunch_1/chem/nxr/sg/TURBO/RUBox/PAR
# ==== end executable statement 
echo "Job done." 
VASP 4.6 VAMP/VASP is a package for performing ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set. The documentation is available online (Users Guide), with a special discussion about the Berry phase. The same information is available in PostScript format in the usual place /usr/local/doc/vasp/.
The following binaries are available:
Machines      Processor     Comments
HPC cluster   EM64T, AMD    parallel

The program is started with the command vasp.
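This page does not include a VASP job example; the following is only a sketch of a PBS submission. It assumes an MPI launcher named mpiexec is available on the HPC cluster, that the default queue is acceptable, and that the standard VASP input files (INCAR, POSCAR, KPOINTS, POTCAR) are kept in the hypothetical directory $HOME/vasp/myrun. Check the HPC cluster documentation for the correct launcher and queue settings.

#!/bin/bash
#PBS -N vasp46
#PBS -o vasp46.out
#PBS -e vasp46.err
#PBS -l nodes=2:ppn=4

# hypothetical scratch and input directories; adjust to your setup
SCR=/scr_1/tmp/`hostname`.vasp.$$
mkdir $SCR
cd $SCR
cp $HOME/vasp/myrun/INCAR $HOME/vasp/myrun/POSCAR .
cp $HOME/vasp/myrun/KPOINTS $HOME/vasp/myrun/POTCAR .
# assumption: parallel VASP is launched through mpiexec;
# the page states only that the command name is vasp
mpiexec vasp > vasp.log 2>&1
cp OUTCAR CONTCAR OSZICAR vasp.log $HOME/vasp/myrun/
cd ..
rm -rf $SCR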
VMD 1.7 and 1.8.1 VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
Machines   Processor      Comments
buddy      SPARC (MESA)   Version 1.8 with software rendering is installed.

Last Updated 7/15/09
 