Author Archives: richardcoffey

Free “Supercomputing” MOOC starting on 28 August 2017


By Dr David Henty, EPCC, The University of Edinburgh

Today’s supercomputers are the most powerful calculating machines ever
invented, capable of performing more than a thousand million million
calculations every second. This gives scientists and engineers a
powerful new tool to study the natural world – computer simulation.

This free 5-week online course will introduce you to what supercomputers
are, how they are used and how we can exploit their full computational
potential to make scientific breakthroughs. Register for the upcoming
run on 28th August at www.futurelearn.com/courses/supercomputing/.

----------------------------------------------------------------------
This course was developed by EPCC at the University of Edinburgh and
by SURFsara as part of the EC-funded PRACE project.
----------------------------------------------------------------------

SIGHPC Education SC16 Recap


By the SIGHPC Education Committee

Thank you to all who attended our SC16 BoF. We had over 40 participants! If there’s one thing you do after reading this article, please join our communities on social media and contribute to the cause.

  • https://www.facebook.com/sighpcedu/
  • https://twitter.com/sighpcedu
  • https://plus.google.com/communities/101759384971441116586
  • https://www.linkedin.com/groups/12019017

During this meeting we reviewed the need to share, the effort to promote, and things we could do immediately to improve HPC Education broadly.

David Halstead ran quick polls during the meeting, where attendees could respond to questions via a URL or by text message. Richard Coffey and David then prompted the audience for feedback on which topics were important to them. Three main topics came up: 1) repositories for tools, 2) modifying curricula for HPC, and 3) what can be done immediately.

Scott Lathrop headed up the repository breakout. He and his contributors provided the following:

One of the most important findings of this group was that it will not be providing the one repository for all. The group considered how HPC repositories should cross-link and provide visibility to their peer repositories. Ownership (and piracy/theft) of code examples and curricula was a concern for some during the discussion. Keeping the repositories fresh and up to date is also a challenge, as is providing excellent examples to the developers of these materials. During the conversation, the SuiteSparse Matrix Collection was offered as a thorough example: http://www.cise.ufl.edu/research/sparse/matrices/. Finally, there was a great deal of delving into classifications of HPC work: the need for specialized, domain-specific examples as well as for generally classifying and reviewing examples.

The recommendations of this group were:

  • The community should search the problem domains for popular suites like the SuiteSparse Matrix Collection
  • A great repository should have the following: data collections, code examples, curricular models, assignments & testing materials
  • HPC Education or others should provide hosting service (shared space) for metadata, pointers, and possibly the data as well
  • This service should have a reviewing process, possibly similar to Google-style recommendations
  • There is interest in a more formal review process that can test the ability to replicate the curriculum.
  • Don’t reinvent the wheel! Talk with librarians, creative commons, and other folks who can help with metadata
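To make the recommendations above concrete, here is a purely illustrative sketch of what one metadata record in such a shared hosting service might look like, with a simple freshness check to support the “keep repositories up to date” concern. The field names and the `needs_review` helper are hypothetical, not part of any published schema:

```python
# Hypothetical metadata record for one entry in a shared HPC education
# repository: pointers, material types, and cross-links to peer repos.
repo_entry = {
    "title": "SuiteSparse Matrix Collection",
    "url": "http://www.cise.ufl.edu/research/sparse/matrices/",
    "materials": ["data collections"],  # could also hold code examples,
                                        # curricular models, assignments,
                                        # testing materials
    "domain": "sparse linear algebra",
    "license": "varies by matrix",      # ownership was a stated concern
    "last_reviewed": "2016-11",         # supports keeping entries fresh
    "peer_repos": ["hpcuniversity.org", "csinparallel.org"],  # cross-links
}

def needs_review(entry, current="2017-11", max_age_months=12):
    """Flag entries whose last review is older than max_age_months."""
    y1, m1 = map(int, entry["last_reviewed"].split("-"))
    y2, m2 = map(int, current.split("-"))
    return (y2 - y1) * 12 + (m2 - m1) > max_age_months

print(needs_review(repo_entry))  # exactly 12 months old: prints False
```

A real service would of course use an established metadata standard (the last bullet’s advice to talk with librarians points in that direction), but even a record this small captures the cross-linking, ownership, and freshness themes from the breakout.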

Richard facilitated the “what can be done immediately” group and reported back these findings:

The general consensus is that there is too much fragmentation in HPC education and not enough focus. Nearly all of the breakout group members reported that they didn’t know about resources such as hpcuniversity.org and csinparallel.org. There was interest in sending a survey out to the community asking questions like: what is the HPC maturity of your students, what are the needs of the instructors/trainers, and how does your organization support HPC education (if at all)? Additionally, there was interest in creating a shared virtual office where members of the organization could be called or emailed with questions on how to get going.

David led the modifying curriculum breakout and reported back the following:

There is a need for a standardized and accessible “Introduction to Computing” to help on-board students and researchers into computing-enabled science. The discussion also touched on how to develop consistent templates for undergraduate computing courses. Finally, there was a great deal of interest in mechanisms and opportunities to “train the trainers”. David reported that simply sitting around talking about curricula in an open setting helped suss out wheels already invented and lessons learned. Clearly, facilitating a self-help forum has real value for the participants.

Conclusion:

Clearly, there were common themes across the working groups. One of the purposes of SIGHPC Education is to help facilitate awareness and provide cross-linking, and there is an opportunity for it to foster activities such as hangouts on curriculum and a public forum for discussing pedagogy.

We Need to Talk—About Software


By David E. Bernholdt, Oak Ridge National Laboratory, for the IDEAS Productivity Project

In high-performance computing (HPC) we talk a lot about hardware. In computational science and engineering (CSE), we talk a lot about the scientific discoveries and results. But the software that allows us to get those results? Not so much.

The reason is simple: CSE’s professional rewards system focuses more on the results than the tools. Discussions about software engineering best practices, how to make software more sustainable, and the interplay between hardware architecture and software architecture in large, long-lived software packages are rare indeed; it can be hard to find the time, or place, for such conversations.

That’s beginning to change, however. An increasing number of voices are speaking out about the value of software and mounting efforts to resolve the field’s issues. One emerging voice is the IDEAS project, a first-of-its-kind effort supported by the United States Department of Energy to focus on issues of productivity, quality, and sustainability, and one in which I’m deeply involved.

The IDEAS project is contributing to several critical software discussions, such as the meaning of interoperability for numerical libraries and the need to develop a set of standards. We’re listening to the broader software engineering community and the HPC and CSE communities to identify and document best practices for software development in a way that makes them easier for HPC/CSE practitioners to digest and adopt.

We’re also working hard to broaden the audience for these discussions through a variety of training and community-building activities, such as partnering with several DOE computing facilities (ALCF, NERSC, and OLCF) to offer a webinar series on Best Practices for HPC Software Developers. At SC16 in Salt Lake City, we are presenting a tutorial on Testing of HPC Scientific Software and organizing a birds-of-a-feather session on Software Engineering for CSE on Supercomputers. If that sounds interesting (and trust me, it is), you might also want to check out the Fourth International Workshop on Software Engineering for High Performance Computing in Computational Science & Engineering, organized by our collaborator Jeff Carver. We’ve got big plans for upcoming meetings too, like SIAM CSE17.

Finally, we’re trying to nucleate an online community, the CSE Software Forum, with a collection of community resources to support these critical conversations. That’s still early in development, but you can register for the mailing list to hear about events and the CSE Software site.