Glossary

Autonomic Computing an initiative started by IBM in 2001. Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. Thus autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity from operators and users.

CFD Computational fluid dynamics (CFD) is one of the branches of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the millions of calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions.
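
As a purely illustrative sketch (not a production CFD code, and with arbitrary grid, velocity and time-step values chosen here), the C fragment below shows the kind of finite-difference update such numerical methods repeat millions of times, advancing a one-dimensional linear advection equation with a first-order upwind scheme.

/* Illustrative sketch only: 1D linear advection du/dt + c*du/dx = 0,
 * advanced with a first-order upwind finite-difference scheme.
 * Grid size, velocity and time step are arbitrary example values. */
#include <stdio.h>

#define NX 100                       /* number of grid points */

int main(void)
{
    double u[NX], unew[NX];
    const double c  = 1.0;           /* advection velocity */
    const double dx = 1.0 / NX;      /* grid spacing */
    const double dt = 0.5 * dx / c;  /* CFL-stable time step */

    /* initial condition: a square pulse */
    for (int i = 0; i < NX; i++)
        u[i] = (i > NX / 4 && i < NX / 2) ? 1.0 : 0.0;

    /* march forward in time */
    for (int n = 0; n < 50; n++) {
        for (int i = 1; i < NX; i++)
            unew[i] = u[i] - c * dt / dx * (u[i] - u[i - 1]);
        unew[0] = u[0];              /* simple inflow boundary */
        for (int i = 0; i < NX; i++)
            u[i] = unew[i];
    }

    printf("u at mid-domain after 50 steps: %f\n", u[NX / 2]);
    return 0;
}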

Cloud computing a style of computing in which dynamically scalable and often virtualised resources are provided as a service over the Internet. In theory users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.

Cluster a widely-used term meaning independent computers combined into a unified system through software and networking.

CPU Central Processing Unit or processor.

CUDA Compute Unified Device Architecture is a compiler and set of development tools that enable programmers to use a variation of C to code for execution on a Graphical Processing Unit (GPU).

DEISA The Distributed European Infrastructure for Supercomputing Applications.

Distributed computing A distributed computing system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal.

EGEE Enabling Grids for E-sciencE is a series of projects funded by the European Commission. It connects more than 70 institutions in 27 European countries to construct a multi-science computing Grid infrastructure for the European Research Area, allowing researchers to share computing resources.

GPU Graphical Processing Unit (also GP-GPU, General Purpose Graphical Processing Unit).

HEC High-End Computing captures the use of leading-edge IT resources and tools to pursue research, including computer simulation and modelling, the manipulation and storage of large amounts of data, and many other methods to solve research problems that would otherwise be impossible.

HPC High Performance Computing. The use of (parallel) supercomputers and computer clusters, computing systems made of multiple (usually mass-produced) processors linked together in a single system with commercially available interconnects.

Hub and Spoke A distribution paradigm (or model, or network) in which connections are arranged like a chariot wheel: all traffic moves along spokes connected to the hub at the centre. The model is commonly used in distributed computing. In a hub-and-spoke network the hub is likely to be a single point of failure, one of the reasons for favouring a dual-hub approach within HPC Wales.

Monte Carlo methods a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems. Markov chain Monte Carlo (MCMC) methods (which include random walk Monte Carlo methods), are a class of algorithms for sampling from probability distributions based on constructing a Markov chain that has the desired distribution as its equilibrium distribution.
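
As a purely illustrative sketch, the short C program below shows the idea of repeated random sampling: it estimates pi by drawing random points in the unit square and counting how many fall inside the quarter circle (the sample count and seed are arbitrary choices).

/* Illustrative sketch of a Monte Carlo method: estimate pi by
 * repeated random sampling of points in the unit square and
 * counting how many fall inside the quarter circle of radius 1. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long samples = 1000000;    /* arbitrary sample count */
    long inside = 0;

    srand(12345);                    /* fixed seed for repeatability */
    for (long i = 0; i < samples; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            inside++;
    }

    /* area of quarter circle / area of square = pi/4 */
    printf("pi approx %f\n", 4.0 * (double)inside / samples);
    return 0;
}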

MPI Message Passing Interface is a programming technique which uses software libraries to turn serial applications into parallel ones that can run on distributed-memory systems. It is now one of the standard paradigms for parallel programming in C, C++ and Fortran.
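
For illustration, a minimal C sketch of MPI in use: each process reports its rank, and one process passes an integer to another. It assumes an MPI installation with a wrapper compiler such as mpicc and a launcher such as mpirun.

/* Illustrative sketch of MPI message passing: every process reports
 * its rank, and rank 1 sends an integer to rank 0. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    printf("Hello from rank %d of %d\n", rank, size);

    if (size > 1) {
        int msg = 42;
        if (rank == 1)
            MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        else if (rank == 0) {
            MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 0 received %d from rank 1\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}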

MRI Magnetic Resonance Imaging (MRI), or nuclear magnetic resonance imaging (NMRI), is primarily a medical imaging technique most commonly used in radiology to visualise the internal structure and function of the body.

NAREGI the National Research Grid Initiative in Japan, a research and development programme created in 2003 by the Ministry of Education, Culture, Sports, Science and Technology (MEXT).

OCI The Office of Cyberinfrastructure (http://www.nsf.gov/od/oci/about.jsp) at the National Science Foundation coordinates and supports the acquisition, development and provision of state-of-the-art cyberinfrastructure resources, tools and services essential to the conduct of 21st century science and engineering research and education.

PET Positron emission tomography is a nuclear medicine imaging technique which produces a three-dimensional image or picture of functional processes in the body.

Science Gateway A domain-specific computing environment, typically accessed via the Web, that provides a scientific community with end-to-end support for a particular scientific workflow.

SMP Symmetric Multiprocessors: a computer hardware architecture which distributes the computing load over a small number of identical processors, which share memory.
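
For illustration, a minimal C sketch of shared-memory parallelism of the kind SMP systems support, here using OpenMP (one common approach, chosen for brevity) to split a loop across processors that all see the same array in memory.

/* Illustrative sketch of shared-memory parallelism on an SMP system,
 * using OpenMP to split a loop across processors that share the
 * same array. Compile with OpenMP enabled, e.g. -fopenmp. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    /* threads share a[]; each handles a chunk of the iterations */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = (double)i;
        sum += a[i];
    }

    printf("threads available: %d, sum = %f\n",
           omp_get_max_threads(), sum);
    return 0;
}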

Supercomputer A computer that is considered, or was considered at the time of its introduction, to be at the forefront in terms of processing capacity, particularly speed of calculation.

TeraGrid an open scientific discovery infrastructure in the USA combining large computing resources (including supercomputers, storage, and scientific visualization systems) at nine Resource Provider partner sites to create an integrated, persistent computational resource.

Tier-0, Tier-1, Tier-2 denote the levels of a conceptual pyramid of HPC systems, with Tier-0 representing the largest (European-class) machines, Tier-1 national facilities and Tier-2 regional or institutional systems.

UPS An uninterruptible power supply (UPS), also known as a battery back-up, provides emergency power to connected equipment by supplying power from a separate source when utility power is not available. A UPS can be used to provide uninterrupted power to equipment, typically for 5-15 minutes, until an auxiliary power supply can be turned on, utility power restored, or equipment safely shut down. While not limited to safeguarding any particular type of equipment, a UPS is typically used to protect computers, data centres, telecommunication equipment or other electrical equipment where an unexpected power disruption could cause injuries, fatalities, serious business disruption or data loss.

Visualisation the process of converting large amounts of complex, multi-dimensional data into images so people can more quickly and easily see patterns and anomalies in the data. Visualisation technologies are widely used within the HPC community to enable better understanding of the ever larger data sets that computer simulations and sensor networks are creating.

Visual computing research research into the analysis, enhancement and display of visual information to create life-like, real-time experiences and more natural ways for people to interact with computers and other devices.

Interested?

If you would like to know more or discuss a project idea, get in touch.
