Graphcore

Graphcore created a completely new processor, the IPU, specifically designed for AI compute. The IPU's unique architecture lets AI researchers undertake entirely new types of work to drive the next advances in machine intelligence.

VENDOR WEBSITE | BOOK AN ACCESS IPU SLOT

 

 

"
Second generation IPU systems for AI infrastructure at scale

 

The Next IPU: Introducing Bow

Graphcore's new 3rd generation systems are based on the world's first 3D Wafer-on-Wafer processor. The Bow IPU delivers a huge power and efficiency boost, enabling significant performance improvements for real-world AI applications. The Bow Pod systems deliver up to 40% higher performance and 16% better power efficiency - all for the same price and with no changes to existing software.

BOOK AN ACCESS IPU SLOT | VIEW PRODUCT

The Bow-2000 IPU Machine

Based on the new Bow IPU processor, this remarkably efficient system delivers up to 1.4 petaFLOPS of AI performance in a flexible and practical modular design, a standard 1U server blade. A scalable compute and memory communication architecture enables these units to be connected in large numbers to populate data centres and expand to supercomputing scale.

DOWNLOAD PRODUCT BRIEF | CONTACT OUR TEAM | VIEW PRODUCT

 


Graphcore Bow POD16

The Bow Pod16 is ideal for exploration. It provides the power, performance and flexibility needed to fast-track prototyping and move rapidly to production. The Bow Pod16 is the ideal starting point for building better, more innovative AI solutions with IPUs in any machine learning field: language and vision, exploring GNNs and LSTMs, or creating something entirely new.

DOWNLOAD PRODUCT BRIEF | CONTACT OUR TEAM

 


Graphcore Bow POD64

The Bow Pod64 system features 16 Bow-2000 machines, each containing 4 of our pioneering Bow IPU processors. This innovative IPU is the world’s first processor to be manufactured using Wafer-on-Wafer (WoW) technology, taking the benefits of the proven IPU technology to the next level.
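The pod naming follows directly from the figures above: with 4 Bow IPUs per Bow-2000 machine, a pod's number is its total IPU count. A quick sketch in plain Python (illustration only, not Graphcore software):

```python
# Each Bow-2000 machine carries 4 Bow IPU processors (as stated above).
IPUS_PER_MACHINE = 4

# Pod names encode the total IPU count; the machine count follows.
for total_ipus in (16, 64, 256):
    machines = total_ipus // IPUS_PER_MACHINE
    print(f"Bow Pod{total_ipus}: {machines} x Bow-2000 = {total_ipus} IPUs")
```

For Pod64 this gives the 16 Bow-2000 machines described above; the same arithmetic yields 4 machines for Pod16 and 64 for Pod256.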

DOWNLOAD PRODUCT BRIEF | CONTACT OUR TEAM

Graphcore Bow POD256

The Bow Pod256 system is the solution for innovators ready to grow their capacity to supercomputing scale. It delivers massive efficiency and productivity gains by enabling large model training runs to be completed in hours or minutes instead of months or weeks. Bow Pod256 delivers AI at scale for production deployment in enterprise data centres, as well as private and public clouds.

DOWNLOAD PRODUCT BRIEF | CONTACT OUR TEAM


Graphcore IPU-M2000

Graphcore's Intelligence Processing Unit (IPU) hardware and Poplar® software help innovators create next-generation machine intelligence solutions. The IPU is the first processor designed specifically for machine intelligence and offers significant performance advantages over the computing hardware typically used in artificial intelligence. In addition to outperforming other technologies at today's most common workloads, the Graphcore IPU has been architected to excel at next-generation AI applications, including highly sparse models.

 

 

A NEW ERA OF AI RESEARCH

 

Material scientists, physicists, roboticists, genomics specialists, epidemiologists, cosmologists, computer scientists and scientific researchers of all kinds are taking advantage of the massively parallel computing power of the IPU to unlock completely new, unlimited directions of research.

Leading universities and research institutions around the globe are using IPUs at scale to make new breakthroughs with machine intelligence.

Speak to a member of our team for more information on how Boston and Graphcore can help your organisation stand out within the field.

BOOK YOUR TEST DRIVE | READ CASE STUDY

 

GRAPHCORE FAMILY

 

 

 

IPU Compute at Data Centre Scale

The IPU-M2000 has a flexible, modular design, so you can start with one and scale to thousands. Connect a single system directly to an existing CPU server, add up to eight connected IPU-M2000s, or grow to supercomputing scale with racks of 16 tightly interconnected IPU-M2000s in IPU-POD64 systems, thanks to the high-bandwidth, near-zero-latency IPU-Fabric™ interconnect architecture built into the box.

FIND OUT MORE | VIEW PRODUCT

MIMD Architecture and Opportunities for AI/ML Optimisation in Finance using IPUs

To date, algorithm acceleration has largely revolved around SIMD-architecture GPGPUs, forcing many classes of problems to be 'rephrased' to suit them. These work well, but when state must be carried between steps - as in MCMC (Markov Chain Monte Carlo), natural language processing, neural net modelling and many other increasingly distributed, federated forms of calculation - other novel approaches such as MIMD need to be considered.
Our experts will discuss how the Graphcore Intelligence Processing Unit (IPU) enables finance firms to use models of greater complexity and iterate faster than is currently possible with legacy processors.
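As an illustration only (plain Python, not Graphcore/Poplar code), the sketch below shows why stateful workloads like MCMC resist SIMD-style batching: each Metropolis chain takes a data-dependent accept/reject branch at every step, so independent chains diverge in control flow and map naturally onto independent MIMD cores rather than lockstep SIMD lanes.

```python
import math
import random

def run_chain(seed, steps=1000):
    """One Metropolis chain targeting a standard normal density.

    The accept/reject branch depends on the chain's own state and its own
    random draws, so different chains diverge in control flow - awkward
    for lockstep SIMD lanes, natural for independent MIMD cores.
    """
    rng = random.Random(seed)  # per-chain state, like a per-core RNG
    x = 0.0
    accepted = 0
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, 1.0)
        # Log-density ratio of N(0, 1): -0.5 * (x'^2 - x^2)
        log_ratio = -0.5 * (proposal * proposal - x * x)
        if rng.random() < math.exp(min(0.0, log_ratio)):  # data-dependent branch
            x = proposal
            accepted += 1
    return x, accepted / steps

# Each chain is an independent, stateful program - the MIMD picture.
for seed in range(4):
    x, rate = run_chain(seed)
    print(f"chain {seed}: final x = {x:+.2f}, accept rate = {rate:.2f}")
```

Because the chains share no state, each could run to completion on its own core with no synchronisation, whereas a SIMD formulation would have to mask or serialise the divergent accept/reject paths.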

WATCH ON DEMAND

 

IPU-M2000

The IPU-M2000 is a revolutionary next-generation system solution built with the Colossus MK2 IPU. It packs 1 PetaFlop of AI compute and up to 450GB Exchange-Memory™ in a slim 1U blade for the most demanding machine intelligence workloads. Designed from the ground up for high performance training and inference workloads, the IPU-M2000 unifies your AI infrastructure for maximum datacentre utilisation.

VIEW PRODUCT | VIEW DATASHEET | FIND OUT MORE

 


 

 

 

IPU-POD16

IPU-POD16 opens up a new world of machine intelligence innovation. Ideal for exploration and experimentation, the IPU-POD16 is the perfect new tool to develop concepts and pilots consolidating both training and inference in one affordable system.

VIEW PRODUCT | DOWNLOAD DATASHEET | FIND OUT MORE

 

IPU-POD64

IPU-POD64 delivers ultimate flexibility to maximise all available space and power in your datacentre, no matter how it is provisioned. IPU-POD64 brings together world-class IPU compute with a choice of best-in-class datacentre technologies and systems in flexible, pre-qualified configurations, ensuring your datacentre operates with maximum efficiency and performance.

DOWNLOAD DATASHEET | FIND OUT MORE

 

 

 


BOSTON GRAPHCORE POPLAR®

The Boston Graphcore Poplar Server is fully qualified by Graphcore as the default host server for IPU-POD systems. The Boston Poplar server acts as the head node for Graphcore IPU-POD configurations, currently available in an IPU-POD16 configuration with 4 x IPU-M2000s, as well as an IPU-POD64 configuration with 16 x IPU-M2000s.

DOWNLOAD DATASHEET | FIND OUT MORE

 

Get in touch to discuss our range of solutions

+91 22 5002 3262

Find your solution

Test out any of our solutions at Boston Labs

To help our clients make informed decisions about new technologies, we have opened up our research & development facilities and actively encourage customers to try the latest platforms using their own tools and, if necessary, together with their existing hardware. Remote access is also available.

Contact us
