
John C. Slater Computing Laboratory

The Slater Lab provides computing support for QTP. This includes support for desktops and laptops, the computer network, high-performance computing, and visualization. It also includes support and training for software engineering and programming, in particular parallel programming.

News and announcements

  • Termination of the Slater Lab
    October 14, 2011

    All business IT services will be handed over to UF Enterprise Services (email), CLASnet (home disk, web service, and printers), and the UF HPC Center (research computing). All QTP servers will be turned off on Wednesday, November 23, 2011.

  • Changes in the Slater Lab
    August 4, 2011

    With the University now providing general support for research computing, all research computing activities supported by the John Slater Laboratory will be moved to the UF HPC Center. This includes all compute clusters and all scratch disks: /scr/arwen_1 and /scr/arwen_2 (Roitberg), /scr/crunch_4 (Cheng), /scr/crunch_5 (Bartlett), and /scr/crunch_6 (other QTP groups). /scr/wukong_2 (Merz) was moved over a year ago.

  • Changes in Compute Clusters
    January 20, 2010

    The Merz group has acquired a new cluster, called kongming. The new cluster is operated by the UF HPC Center and is controlled by the HPC Center Moab scheduler. Jobs must be submitted from the HPC submit nodes at submit.hpc.ufl.edu. There are two load-balanced nodes, submit1 and submit2, both reached through that single name.

    You can use the HPC nodes test02 and test04 for interactive work that goes beyond the normal job and file management tasks that can be performed on the submit nodes. Thus, on the HPC side, linx64 is replaced by submit1, submit2, test02, and test04.

    The kongming nodes are dedicated to Merz group members and are selected in the job file in the same way as other QTP clusters, namely by specifying the attribute "kongming" with the -l option in the job script:

       #PBS -l nodes=1:ppn=12:kongming

    The nodes are identified by their rack and slot number, such as r14a-s20, which is the left-hand node in slot 20, while r14b-s20 is the right-hand node (there are two independent nodes per rack slot). The nodes also have "kongming" aliases such as kongming1, kongming2, etc., but these are mostly irrelevant: users should not log into individual nodes, since the nodes are under the control of the scheduler.
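
    For reference, a complete job script for kongming might look like the following minimal sketch; the job name, walltime, and executable are placeholders, not prescribed values:

       #!/bin/bash
       #PBS -N kongming_test            # hypothetical job name
       #PBS -l nodes=1:ppn=12:kongming  # request one full kongming node (12 cores)
       #PBS -l walltime=01:00:00        # assumed one-hour limit; adjust as needed
       #PBS -j oe                       # merge stdout and stderr into one file

       cd $PBS_O_WORKDIR                # start in the directory qsub was run from
       ./my_program                     # placeholder for your own executable

    Save this as, for example, job.pbs, log into a submit node with ssh submit.hpc.ufl.edu, and submit it with qsub job.pbs; qstat -u $USER then shows its status.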

    This acquisition prompts several changes. Each step will be announced separately.

    1. Move of QTP Moab Scheduler to linx64
      On Tuesday, January 26, 2010, the QTP Moab scheduler, now running on wukong, will be stopped and restarted on linx64. Since all QTP jobs already have to be submitted from linx64, this will make little difference to users. The main disruption is that running jobs will be killed when the old scheduler is turned off.
      Please let me know if that causes a problem.
    2. Move of wukong to Larsen Hall machine room
      On the same day as the scheduler move, to minimize disruption of running jobs, the storage server wukong will be moved to the Larsen Hall machine room. This server hosts the file system /scr/wukong_2, which is accessed by the kongming, wukong, ra, and crunch servers.
      The wukong cluster will be moved at the same time. This will improve network connectivity and simplify power and cooling management for both the server and the cluster.
      The /scratch/wukong file system will then become available on all HPC nodes, including kongming, wukong, and ra. It is already available at this time to support kongming. It can be accessed from Mac and Linux desktops and laptops through the HPC Center Samba server at smb://samba.hpc.ufl.edu (see the mount sketch at the end of this list). Please open a bugzilla ticket to request access through Samba.
      The /scr/wukong_2 file system on the QTP systems will no longer be available.
    3. Maintenance on ra
      Finally, the nodes of the ra cluster will be updated with the latest HPC node software to make them compatible with the current HPC and kongming clusters.
      The racks of ra will be moved within NPB 1114 to improve air flow and cooling.
      Following this work, the ra nodes will be scheduled by the HPC Center scheduler as well.
    4. Wider use of ra capacity
      With the addition of the ra cluster to the HPC Center scheduling pool, QTP users will have access to a proportionately larger share of the HPC Center resource pool.
      The Merz group members will retain first priority.
    Please contact me if you have any questions or concerns about these changes.
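
    As mentioned under item 2 above, the /scratch/wukong file system can be mounted over Samba once access has been granted through a bugzilla ticket. A minimal sketch for Linux and Mac follows; the share name "wukong" and the mount points are assumptions, not confirmed values:

       # Linux: mount the assumed wukong share over CIFS
       sudo mkdir -p /mnt/wukong
       sudo mount -t cifs //samba.hpc.ufl.edu/wukong /mnt/wukong -o user=$USER

       # Mac: mount the same assumed share with mount_smbfs,
       # or simply open smb://samba.hpc.ufl.edu in the Finder
       mkdir -p ~/wukong
       mount_smbfs //$USER@samba.hpc.ufl.edu/wukong ~/wukong
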
  • HyperChem 8 license for NPB
    September 20, 2009

    The computational chemistry software HyperChem 8 for Windows is available to all members of QTP. We have a license that works within the Physics Building network. Versions for MacOS and Linux are expected in a few months.
    Details on how to get the CD and how to install HyperChem on your laptop or desktop can be found here.
