Episodes
Bioinformatics, and more widely computational biology, is a largely data-driven science. The array of high-throughput technology platforms that has emerged over the last 10 years means that the amount of data being generated in this field is likely to reach the exabyte scale by 2020. The challenges this poses are quite different from those of the data sets generated by High Energy Physics or Astrophysics, in that the data tend to be gathered from a wide variety of different providers. Meta-analyses of these data sets can...
Published 03/21/14
Performing complex solar shading analysis that takes into account the sun's path and solar penetration on large buildings has historically consumed a great many CPU cycles for users of the IES "Virtual Environment" (3D building physics) simulation software. One particularly complex model took almost two weeks to process.
A description of the collaboration between IES and EPCC to apply HPC practices to the solar analysis algorithm will be presented, along with benchmark results. Additionally we will cover the pitfalls found...
Published 03/14/14
Intel will provide an insight into future HPC technology development looking at hardware trends, ecosystem support and the challenges around ExaScale computing.
The talk will also touch upon the convergence of High Performance Computing and High Performance Data Analytics, examining where the effective use of this rapidly maturing capability can provide industry and academia with a competitive advantage.
This talk was given as part of our MSc in HPC's 'HPC Ecosystem' course.
Talk slides
Published 02/28/14
PrimeGrid is a volunteer computing project that gives participants the chance to be the discoverer of a new world record prime number! In addition, we are working towards the solution of several mathematical problems which have remained unsolved for over 50 years. The talk will cover some basic facts about prime numbers, the history of the search for large primes (and a little of the maths!), and show the audience how they can use their computers to join PrimeGrid and find new primes of their...
Published 09/07/13
There are several important science and engineering problems that require the coordinated execution of multiple high-performance simulations. Common scenarios include, but are not limited to, "an ensemble of tasks", "loosely-coupled simulations of tightly-coupled simulations" and "multi-component multi-physics simulations". Historically, however, supercomputing centres have supported and prioritised the execution of single "jobs" on supercomputers. Not surprisingly, the tools and...
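As a hedged illustration of the "ensemble of tasks" scenario (not taken from the talk itself), the sketch below launches many independent simulation runs inside a single allocation using only Python's standard library; the `./simulate` binary and its parameter sweep are hypothetical placeholders.

```python
# Minimal sketch: running an ensemble of independent tasks within one
# allocation. The simulation command and parameters are hypothetical.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_member(param):
    """Launch one ensemble member as an external process (hypothetical binary)."""
    result = subprocess.run(
        ["./simulate", "--param", str(param)],
        capture_output=True, text=True,
    )
    return param, result.returncode

if __name__ == "__main__":
    params = [0.1 * i for i in range(32)]             # hypothetical parameter sweep
    with ProcessPoolExecutor(max_workers=8) as pool:  # run 8 members at a time
        for param, rc in pool.map(run_member, params):
            print(f"member param={param:.1f} finished with exit code {rc}")
```

In practice this kind of coordination is what pilot-job and workflow tools provide on top of the batch system; the sketch only shows the shape of the problem.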
Published 04/02/13
In this talk I will give a brief history of parallel processing in games and how the industry has responded to hardware changes in its constant race to create games with more, better and faster. I then consider some of the lessons we have learned so far and finish with my opinion on how a future game engine might be structured to target many-core architectures.
This talk was given as part of our MSc in HPC's 'HPC Ecosystem' course.
Talk slides
Videos
Published 03/22/13
The Met Office hosts some of the largest computers in the UK to predict the weather and changes in the climate. Best known for the Public Weather Service, the Met Office also advises the UK government and other organisations worldwide. The difficulties of modelling a system as complex as the Earth's atmosphere, how it is actually done, and what future computational challenges lie ahead will be discussed.
This talk was given as part of our MSc in HPC's 'HPC Ecosystem' course.
Talk slides
Published 03/15/13
This talk will look at the drivers leading the way to Exascale computing. It will examine technology trends, and use these to infer some of the characteristics of future large-scale HPC systems. It will also look into what this means for the software environment, reliability, use and affordability of IT systems at the high end. Recently, the IT industry has become more focused on problems involving the use and management of large data sets (Big Data), and we map this trend on to the...
Published 03/01/13
Building design has been revolutionised in recent years. New materials and better construction techniques have allowed bespoke and impressive public spaces to be created. Consider the Millennium Dome, Hong Kong airport, or the 2012 Olympic stadium.
These bespoke spaces represent a significant challenge for fire safety. Unlike conventional buildings, with regular layouts and dimensions, there is very little experience to suggest how fires might develop in these spaces, and how people...
Published 03/23/12
The past five years have seen the use of graphical processing units for computation grow from being the interest of a handful of early adopters to a mainstream technology used in the world’s largest supercomputers. The CUDA GPU programming ecosystem today provides all that a developer needs to accelerate scientific applications with GPUs. The architecture of a GPU has much to offer to the future of large-scale computing where energy-efficiency is paramount. NVIDIA is the lead contractor for...
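The talk describes the CUDA C/C++ ecosystem; purely as a hedged illustration of the same kernel-launch model (a grid of thread blocks, each thread handling one element), the sketch below uses Numba's CUDA bindings in Python rather than the speaker's material. It assumes an NVIDIA GPU and the numba package are available.

```python
# Minimal sketch of the CUDA programming model: a kernel launched over a grid
# of thread blocks, written with Numba's CUDA support rather than CUDA C/C++.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # global thread index across the whole grid
    if i < x.size:                # guard the last, partially filled block
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # host<->device copies handled implicitly
print(out[:4])
```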
Published 03/02/12
First, we will review some basic properties and theorems regarding prime numbers, and take a quick trip through the history of prime number searching. Second, we will discuss two classes of algorithms of importance for computational primality testing - sieving and the Lucas-Lehmer (and similar) tests - and their implementations on modern CPUs and GPUs. Finally, we will introduce GIMPS and PrimeGrid, two large and well-known distributed prime search projects.
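As a hedged illustration of the two algorithm classes mentioned above (not drawn from the talk itself), the Python sketch below shows a simple Sieve of Eratosthenes and the Lucas-Lehmer test for Mersenne numbers 2^p - 1; real projects such as GIMPS use heavily optimised FFT-based arithmetic instead.

```python
# Minimal sketches of the two algorithm classes named above: sieving and the
# Lucas-Lehmer primality test for Mersenne numbers 2**p - 1.

def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to and including limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

def lucas_lehmer(p):
    """Return True if 2**p - 1 is prime, for an odd prime exponent p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(sieve(30))                                      # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print([p for p in sieve(30)[1:] if lucas_lehmer(p)])  # exponents <= 29 giving Mersenne primes: [3, 5, 7, 13, 17, 19]
```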
Links:
Talk slides
Published 10/14/11
There is a plethora of version control systems and it is not obvious which to choose. Do you pine for the warm, comforting blanket of CVS? Fear not! I shall give an overview of the zoo of version control systems available to the modern programmer. I'll describe the design principles (there is much overlap) and how these influence the use patterns. I hope to convince you that modern systems are at least as good as CVS and Subversion, and to indicate some of the areas in which they make a...
Published 03/16/11
Cloud computing has become one of the most advertised and talked-about technologies of the past few years, as many ISPs and IT firms have begun to adopt virtualisation technologies. As cloud technologies mature, more and more businesses are moving their services into “the cloud” or building internal clouds to cater for their IT needs. Gihan Munasinghe & Tabassum Sharif of Flexiant discuss the advantages of having cloud infrastructures, introduce Flexiscale and look into how you can...
Published 02/11/11
Fluidity is a powerful Computational Fluid Dynamics framework developed at Imperial College over the last 20 years. Fluidity can be applied to a number of scientific applications, ranging from classic CFD, to oceans and multi-material problems. It uses a variety of discretisations on an unstructured mesh and includes a number of novel and innovative features, such as adaptive re-meshing, a user-friendly interface, and a Python interface. The main advantage Fluidity has is the adaptive...
Published 01/28/11
The Materials Chemistry Consortium (MCC) is the single largest consumer of resources on the HECToR National Supercomputer. In this talk I will give an overview of how MCC members exploit HECToR in a wide range of materials chemistry research and the types of applications that we are interested in.
Links:
Slides - Talk slides
Published 01/21/11
Traditional disk technology has lagged behind processor and memory technology for some time. During this session we will discuss what the problems are and how new technologies such as enterprise class Solid State Drives (SSD) are helping. Until now, high performance storage systems have been designed around the characteristics of spinning magnetic media. This session will also discuss design considerations for such high performance systems, an overview of storage virtualization, and how these...
Published 03/19/10
The Met Office has long used high performance computing to support its activities in weather prediction and climate research. This talk starts by giving an overview of the numerical weather prediction process and continues with a look at the HPC platforms used to make solving these problems possible. Some of the challenges currently faced, particularly in the area of scalability, are presented and recent progress in these areas discussed.
Links:
Slides - Talk slides
Published 02/26/10
This talk traces the development of the different types of instruction-level parallelism that have been incorporated into the hardware of processors from the early 1960s to the present day. We will see how the use of parallel function units in the CDC 6600 eventually led to the design of the Cray-1 with its vector processing instructions and superscalar processors such as the Alpha. The talk also covers Very Long Instruction Word systems and SIMD systems. The talk is accompanied by visual...
Published 02/05/10
Lattice QCD is a very important tool for understanding the Standard Model of particle physics. It presents enormous computational challenges, both in carrying out simulations and in making sure they are as efficient as possible. This lecture explains why we need lattice QCD; describes how the UKQCD Collaboration sets out to overcome these computational challenges; and illustrates how GPUs are used here in Edinburgh.
Links:
Slides - Talk slides
Published 01/22/10