University of Canterbury Visit 2017

A visit to the University of Canterbury was conducted on February 15, 2017. The University of Canterbury once maintained an impressive collection of HPC facilities of its own. Alas, much of that has now been decommissioned (although Popper is still operational), with users largely moved to the national facilities coordinated by NeSI and hosted at NIWA and the University of Auckland. The NIWA system is a 3,200+ core P575/POWER6 system running AIX, whereas the University of Auckland system is a 6,000+ core system running Linux with over 40 GPU devices. Both use InfiniBand as their interconnect (DDR for NIWA, QDR for Auckland). Canterbury is, however, heavily involved in the QuakeCoRE project, which is building a national network of leading New Zealand earthquake resilience researchers.

The main issues confronting HPC at the University of Canterbury are familiar: the cost of operating such facilities, the level of user education required, and, almost paradoxically, their necessity for the processing of large datasets. With regard to the first issue, significant interest was expressed in "the Melbourne model", where larger computational resources were made available on a limited budget through the use of RDMA over Converged Ethernet and the use of cloud resources for single-node and multi-node jobs. With regard to the second issue, the University is promoting a staged approach where users can move gradually from an overwhelmed desktop system, to cloud resources, to testing HPC on the cloud, to smaller departmental HPC systems, and finally to the peak facilities.

Special thanks are given to the members of the various New Zealand facilities who took the time to accommodate my visit and provide tours of their facilities. This includes Dan Sun, Sung Bae, Daniel Lagrava, and Francois Bissey at the University of Canterbury.

Originally posted at: