
Faces of HPC: Guido Falk von Rudorff

Guido Falk von Rudorff is a postdoctoral researcher at Universität Basel in Switzerland. He completed his PhD at University College London in the Department of Physics and Astronomy, and now works on methods to reduce the computational effort required to predict molecular properties.

Introduction

Guido has always been interested in automating tedious tasks and wants to use automation to improve the quality of the work humans produce. Because of this, he uses HPC frequently and considers it an extremely valuable tool for solving problems.

Biography

Tell us a bit about yourself – where you’re from, what you’ve studied and where, and what some of your outside interests are.

Originally from the countryside in Germany, I studied Physics in Berlin at the Freie Universität. It’s a great place to interact with students from other departments – not only to borrow concepts or methods from other disciplines, but also to learn something about different subjects. After a brief period as a researcher in Halle, I moved to the UK for my PhD at University College London. I particularly enjoyed the Thomas Young Centre there, which connects researchers of all career stages across different London universities. Their seminars and events ensured that students did not become too focused on the topic of their PhD. Currently, I am a postdoc at Universität Basel in Switzerland.

What is your current job? Describe what you do in HPC. Is this your main interest, or something you fell into?

While the focus of my work is computational chemistry, HPC is a highly valuable, if not strictly necessary, tool for making progress. Over the past years, HPC has enabled me to follow up on several questions ranging from surface chemistry (chemistry focused on the processes occurring at interfaces, especially between liquid and gas) to membrane dynamics (the properties of membranes) to molecular cluster optimisation (the improvement of collections of atoms/molecules). Currently, I am working on methods to reduce the computational effort needed to predict molecular properties. This takes a lot of (computational) power because of how complex these properties can be. In this context, scalable high-throughput calculations are absolutely required.

How did you become interested in HPC? Briefly describe your path into HPC.

When I started with computers in general, I was fascinated by the automation of mundane tasks. Not only because some things are boring, but also because repetitive work is error prone – it’s not something humans are particularly good at. With that in mind, I heard a lecture years later about numerical methods and quantum chemical calculations which was so captivating that I had to give it a try and – so far – have never given it up.

As part of this project we want to celebrate the diversity of HPC, in particular to promote equality across the nine “protected” characteristics of the UK Equality Act, which are replicated in world-wide equality legislation. Do you feel an affiliation with this matter, and if so how has this interacted with or impacted your job in the HPC community?

Personally, I support every action aimed at equal opportunities for everyone. Surely, research benefits from bringing as many different viewpoints, concepts and ideas together as possible, which is why diversity in all aspects is key to success.

Is there something about you that’s given you a unique or creative approach to what you do?

I would not go so far as to claim that this approach is unique, but I try to design my work according to three principles. If you have two ideas and one of them can be automated, go with that one, because compute time is scalable, while human time is not. If you have a problem, try to make it look like something you know from another academic discipline: usually a similar problem has already been solved. When in doubt, ask experienced people concise questions – and be available for others to ask you.

Were there any challenges when you first entered the field? How have you overcome these, or do they continue to challenge you?

My main concern is the book-keeping of data. Documenting your work (both the successful ideas and the unsuccessful ones) is central to academic work. However, this is hard to do reliably without requiring too much human effort. In particular, documenting analysis scripts and software is a challenge, because they do not form standalone, automatically generated entities the way simulation data does, where you have a clear input, output and software involved. My current approach is to give each simulation setup I prepare a unique (random) ID and to have versioned analysis scripts query the data from whatever host stores the simulation data for that ID. This allows for on-the-fly analysis but also for archiving data arbitrarily across machines. It is effective, but not the most efficient. I feel that more concepts have to be developed to document research progress digitally and efficiently.
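To make the idea concrete, here is a minimal sketch of how such an ID-based lookup could work; the directory names and helper functions are hypothetical illustrations, not Guido's actual tooling:

```python
import uuid
import pathlib

# Hypothetical storage locations that might hold archived simulation data.
DATA_HOSTS = ["/scratch/simulations", "/archive/simulations"]


def new_simulation_id() -> str:
    """Generate a unique, random ID for a freshly prepared simulation setup."""
    return uuid.uuid4().hex


def locate_simulation(sim_id: str) -> pathlib.Path:
    """Find the directory holding the data for a given simulation ID,
    regardless of which storage location it was archived to."""
    for host in DATA_HOSTS:
        candidate = pathlib.Path(host) / sim_id
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError(f"no data found for simulation {sim_id}")


# A versioned analysis script would then reference only the ID:
# data_dir = locate_simulation("3f2c9a...")
```

Because the analysis script only ever references the ID, the underlying data can be moved or archived to a different machine without breaking the script.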

What’s the best thing about working in HPC?

The joy when so many machines are all doing their part in a large workflow to produce research insight. And, to be honest, that you can be productive while sleeping, because the cluster is doing its job.

If there’s one thing about HPC you could change, what would it be?

If I were an engineer, I would like to see faster storage on clusters, because writing intermediate results or ephemeral (short-lived) restart information is unavoidable but leaves CPUs idle. If I were a manager, I would like to give users more insight into the current plan of the scheduler (the system software that decides when jobs start and stop), so they can tailor jobs to fill the inevitable gaps in the scheduler's plan and improve overall utilisation. As a user, I would like to have closer contact with the experts running the clusters to make the most of the available knowledge.

What’s next for you in HPC – where does your career lead you?

In the context of HPC, I see two directions I would like to follow. The first is to balance quantum chemical methods of different cost to obtain reliable information on a large problem set. The second is about being rigorous in the analysis steps of research: using unsupervised analysis methods with anomaly detection to remove human bias from time series analysis.
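As a toy illustration of that kind of unsupervised anomaly detection (a simple rolling z-score on a synthetic time series; this is an assumed example, not a method from Guido's research):

```python
import numpy as np


def flag_anomalies(series, window=50, threshold=4.0):
    """Flag points that deviate strongly from the preceding rolling window,
    using a fixed statistical criterion instead of human judgement."""
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        reference = series[i - window:i]
        sigma = reference.std()
        if sigma > 0 and abs(series[i] - reference.mean()) > threshold * sigma:
            flags[i] = True
    return flags


# Synthetic "observable" with one injected spike at index 600.
rng = np.random.default_rng(0)
signal = rng.normal(size=1000)
signal[600] += 10.0
print(np.flatnonzero(flag_anomalies(signal)))  # the spike at index 600 should be flagged
```

The point is that the flagged indices come from a fixed statistical criterion rather than from a human deciding which parts of the time series look suspicious.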
