Over the past few days I've been quite influenced by a short essay by Brianna Wiest, which opens with a real kicker: "It is the hardest thing you will ever have to do, and it will also be the most important: stop giving your love to those who aren't ready to love you. Stop having hard conversations with people who don't want to change." Ultimately, this is an argument for equality in all relationships, an equality that is not a matter of capability or social position.
This year I completed my MHEd thesis (early), finishing the degree in this field. The following is the abstract and a link to the study.
In 2018 Intel x86 microprocessors were found to be particularly susceptible to the Meltdown security vulnerability, whereby any system that allowed out-of-order execution was potentially open to an attack in which a process could read memory that it was not authorised to access. As this vulnerability did not affect AMD processors, suggestions were raised that AMD could be a more effective choice for HPC environments. In the same year, as a topic at the International Supercomputing Conference, the European Processor Initiative (EPI), a program to develop processors for domestic supercomputers, announced the "European Processor Accelerator" system-on-a-chip, based on ARM (Advanced RISC Machine) and RISC-V. With the benefit of four years of hindsight, it is valuable to consider the current trends in microprocessor architecture.
A wide-ranging analysis presented recently at HPCAsia2021 examined trends over the last 27 years across more than 10,000 computers from the Top500, with a more detailed analysis of 28 systems from 2009 to 2019. Of particular note in this context is the steady recent growth of heterogeneous supercomputers, i.e., systems with GPGPUs, to 28% of the Top500, an increase of 1% per annum. The authors note: "We expect this increasing trend will continue, particularly for addressing technological limitations controlling the power consumption", a claim that could certainly be justified by the use of Nvidia GPUs or Intel Xeon Phi (discontinued as of 2020) as co-processors. At the time most systems clustered around 1 GB of memory per core, with only three contemporary systems at 2 GB per core; there was wide variation in compute performance and parallel file system storage, and an increasing use among the most powerful systems of burst buffer storage to bridge the performance gap between memory and the file system.
The 2022 eResearch New Zealand conference was held from the 9th to the 11th of February, co-hosted by New Zealand eScience Infrastructure (NeSI), REANNZ, and Genomics Aotearoa, and preceded by Carpentries Connect, an event for the community of institutions and universities that run Software Carpentry, Data Carpentry, and Library Carpentry workshops.
The complexity of many contemporary scientific workflows is well known, both in the laboratory setting and in the computational processes. One discipline where this is particularly true is biochemistry, and in 2017 the Nobel Prize in Chemistry was awarded for the development of cryo-electron microscopy (cryo-EM). This technique allows researchers to "freeze" biomolecules in mid-movement and visualise their three-dimensional structures, aiding in the understanding of their function and interactions, which is, of course, essential in drug discovery pipelines.
"In The Long Run We Are All Dead: A Preliminary Sketch of Demographic, Economic, and Technological Trends for the University Sector"
University participation and costs are both rising. What evidence is there that the public is receiving good value? Using demographic data, trends, and analysis from Australia, and considering contemporary developments in information and communications technology, a preliminary assessment is made of the economic and political future of the public university sector.
The importance of John Lions to computing history has spanned decades and continues today. In 1976 he published, through the University of New South Wales, the Lions' Commentary on UNIX 6th Edition, with Source Code. The book served both as an explanation of the UNIX kernel and as a teaching tool. The content was extraordinarily well written, explaining difficult concepts with remarkable coherence and engaging in an early form of instructional scaffolding.
For the third time in the past year I have sat down to watch "The Wind Rises", Hayao Miyazaki's fictionalised biography of Jiro Horikoshi, animated by Studio Ghibli, the title derived from a line in Paul Valéry's "The Graveyard by the Sea": "Le vent se lève! Il faut tenter de vivre!" ("The wind rises! We must try to live!") In this tale, Jiro is an engineer who follows his childhood dream of designing aircraft. However, the setting is imperial Japan in the 1930s, and Jiro's employer, Mitsubishi, is under direction to build efficient planes for the purposes of warfare.
When I rage-quit, I don't mince my words.
I am going to have to withdraw from the XXXXX and the XXXXX course.
The content that has been provided by the latter is infuriating. It embodies everything that I have considered wrong with the teaching of computing tools over the past twenty years; I have witnessed ever-increasing numbers of learners leave such courses still ignorant of IT systems.
As dataset size and complexity grow, researchers increasingly need additional computational power for processing. A preferred choice is high performance computing (HPC), which, due to its physical architecture, operating system, and optimised application installations, is best suited to such processing. However, HPC systems have historically been less effective at visual display, least of all in an interactive manner, leading to the general truism of "compute on the HPC, visualise locally".
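A minimal sketch of that truism, using only the Python standard library and hypothetical function and file names: the HPC side does the heavy computation and reduces the raw data to a small summary file, which a researcher would then copy to a workstation (e.g. with scp) and plot there with a local tool.

```python
import json
import math
import random

def simulate(n=100_000, seed=42):
    """Stand-in for an HPC-side computation: sample a noisy signal."""
    rng = random.Random(seed)
    return [math.sin(i / 1000) + rng.gauss(0, 0.1) for i in range(n)]

def reduce_for_transfer(samples, bins=100):
    """Reduce raw samples to binned means small enough to download."""
    step = len(samples) // bins
    return {
        "n": len(samples),
        "bin_means": [sum(samples[i:i + step]) / step
                      for i in range(0, len(samples), step)],
    }

# On the HPC system: compute, reduce, and write a compact summary.
summary = reduce_for_transfer(simulate())
with open("summary.json", "w") as f:
    json.dump(summary, f)

# The small summary.json is then copied to a local workstation
# and visualised there, rather than rendering on the HPC itself.
```

The point of the reduction step is that only kilobytes of summary data cross the network, not the gigabytes of raw output, which is what makes the "visualise locally" half of the pattern practical.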