Slater Lab clusters User Guide
Recent changes in QTP clusters

In the summer of 2008 we upgraded the software, operating system, and compilers on the QTP clusters arwen, haku, ra, surg, ock, and wukong. The new version of the software and its organization are the same as those used in the UF HPC Center, to make it easier to mix the use of both systems in the same project.

The QTP Linux clusters now use Torque and Moab as the resource manager and scheduler, respectively. This makes the behavior consistent with the UF HPC Center and allows us to use the HPC Center license, which includes the capability to do UF-wide grid computing and grid scheduling. To manage jobs on the Linux clusters arwen, haku, ra, surg, ock, and wukong: use ssh to connect to linx64, and use the commands qstat, qsub, qdel, etc. to manage jobs. The Moab scheduler and Torque resource manager run on wukong, so all jobs have IDs of the form ######.wukong.

The interactive node linx32 still needs to be upgraded as of May 6, 2009. This upgrade may or may not happen, as 32-bit machines are old and too slow for most users. At this time linx32 cannot be used to submit jobs or check their status.

The details of the queues, such as their names and the default and maximum limits for walltime and RAM per CPU, can be found on the Moab/Torque guide page.

The old QTP clusters simu and atanasoff were dismantled during the summer. Some of the simu nodes are still running, but they are no longer supported. Use them as they fit your projects.

Closer connection to HPC

The new software on the nodes and on linx64 is the same as the software on the HPC Center cluster. Thus the latest Intel compilers are available on the new nodes. In addition, the Lustre parallel file systems /scratch/ufhpc (30 TB) and /scratch/crn (80 TB) are mounted on linx64 and on the QTP cluster nodes, except the arwen nodes. This enables easier access to the same files when working on a large project that uses both QTP and HPC Center resources.

However, this connection is over Gigabit Ethernet rather than InfiniBand. The performance is therefore good for general file manipulation, and for some applications it is even good enough to store high-intensity scratch files, such as integral files for Gaussian. Details about the HPC Center cluster can be found at the HPC Center web site.

The clusters amun, arwen, haku, ra, surg, ock, wukong

Arwen was installed in Spring 2004; amun, haku, and ra in Summer 2005; surg in Winter 2006; ock in Winter 2007; and wukong in Summer 2008. Amun was dismantled to make room for wukong.
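As a starting point for using the Torque commands mentioned above, the sketch below shows a minimal job script and the submit/monitor/delete cycle from linx64. The job name, queue name, resource limits, and program name are placeholders, not actual QTP defaults; consult the Moab/Torque guide page for the real queue names and walltime/RAM limits.

```shell
#!/bin/bash
# Minimal Torque/PBS job script (save as, e.g., myjob.pbs).
# All values below are illustrative placeholders.
#PBS -N example_job          # job name
#PBS -l walltime=01:00:00    # requested wall-clock time
#PBS -l nodes=1:ppn=1        # one CPU on one node
#PBS -l pmem=1gb             # RAM per CPU
#PBS -j oe                   # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"          # run in the directory qsub was called from
./my_program > my_program.out
```

On linx64, `qsub myjob.pbs` returns a job ID of the form ######.wukong; `qstat -u $USER` lists your jobs, and `qdel ######.wukong` removes one from the queue.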
Characteristics
The Linux clusters are suited for calculations that require:
The QTP Linux clusters do not have a fast interconnect. For parallel jobs that require many processors to communicate with each other, the HPC Center clusters are a better choice.
Commands
Logging in
Scheduling principles
Cluster ownership and usage

The primary use of each QTP cluster is as follows:

A member of QTP can make individual arrangements with the principal investigator of each research group to use that group's cluster for special projects. The HPC Center cluster can be used by any researcher on campus. Prof. Cheng (Phase II and Phase III) and Prof. Merz (Phase III) have invested in the HPC Center at the faculty level, which gives jobs submitted by members of their research groups higher priority; QTP has invested at the department/institute level in Phase III, which gives jobs from all members of QTP higher priority. The College of Liberal Arts and Sciences has invested at the college level, which gives an advantage to all researchers in CLAS.
Have a Question? Contact us. Last Updated 5/6/09