Graphcore's new 3rd generation systems are based on the world's first 3D Wafer-on-Wafer processor. The Bow IPU delivers a huge power and efficiency boost, enabling significant performance improvements for real-world AI applications. The Bow Pod systems deliver up to 40% higher performance and 16% better power efficiency - all for the same price and with no changes to existing software.
Based on the new Bow IPU processor, this remarkably efficient system delivers up to 1.4 petaFLOPS of AI performance in a flexible and practical modular design, a standard 1U server blade. A scalable compute and memory communication architecture enables these units to be connected in large numbers to populate data centres and expand to supercomputing scale.
The Bow Pod16 is ideal for exploration. It provides all the power, performance and flexibility required to fast track the prototyping stage, rapidly moving to production. Bow Pod16 is the ideal starting point for building better, more innovative AI solutions with IPUs, in any machine learning field, including language and vision, exploring GNNs and LSTMs or creating something entirely new.
The Bow Pod64 system features 16 Bow-2000 machines, each containing 4 of our pioneering Bow IPU processors. This innovative IPU is the world’s first processor to be manufactured using Wafer-on-Wafer (WoW) technology, taking the benefits of the proven IPU technology to the next level.
The Bow Pod256 system is the solution for innovators ready to grow their capacity to supercomputing scale. It delivers massive efficiency and productivity gains by enabling large model training runs to be completed in hours or minutes instead of months or weeks. Bow Pod256 delivers AI at scale for production deployment in enterprise data centres, as well as private and public clouds.
Graphcore's Intelligence Processing Unit (IPU) hardware and Poplar® software help innovators create next-generation machine intelligence solutions. The IPU is the first processor to be designed specifically for Machine Intelligence and offers significant performance advantages compared to other computing hardware typically used in artificial intelligence. In addition to outperforming other technologies at today’s most common workloads, the Graphcore IPU has been architected to excel at next-generation AI applications – including highly sparse models.
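To illustrate why sparsity matters, here is a minimal, generic sketch (plain Python with SciPy, not Graphcore-specific code): in a highly sparse weight matrix most entries are zero, so a dense multiply wastes most of its arithmetic, while a sparse representation touches only the nonzeros.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Illustrative only: a "highly sparse" weight matrix with 95% zeros.
# A dense matvec performs 1024*1024 multiply-adds; the sparse matvec
# performs work proportional to the ~5% nonzero entries.
rng = np.random.default_rng(0)
w_sparse = sparse_random(1024, 1024, density=0.05, random_state=0, format="csr")
x = rng.standard_normal(1024)

y = w_sparse @ x                   # sparse matvec: only nonzeros are visited
y_dense = w_sparse.toarray() @ x   # equivalent dense computation

assert np.allclose(y, y_dense)     # same result, far less work when sparse
```

Hardware that can exploit this structure directly, rather than padding it back into dense blocks, is what the passage above refers to.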
Material scientists, physicists, roboticists, genomics specialists, epidemiologists, cosmologists, computer scientists and scientific researchers of all kinds are taking advantage of the massively parallel computing power of the IPU to unlock entirely new directions of research.
Leading universities and research institutions around the globe are using IPUs at scale to make new breakthroughs with machine intelligence.
Speak to a member of our team for more information on how Boston and Graphcore can help your organisation stand out within the field.
The IPU-M2000 has a flexible, modular design, so you can start with one and scale to thousands. Directly connect a single system to an existing CPU server, add up to eight connected IPU-M2000s, or grow to supercomputing scale with racks of 16 tightly interconnected IPU-M2000s in IPU-POD64 systems, thanks to the high-bandwidth, near-zero-latency IPU-Fabric™ interconnect architecture built into the box.
To date, the acceleration of algorithms has revolved around SIMD-architecture GPGPUs, forcing many classes of problems to be 'rephrased' to suit them. These work well, but when state must be carried between steps, as in MCMC (Markov Chain Monte Carlo), natural language processing, neural network modelling and many other increasingly distributed, federated forms of calculation, novel approaches such as MIMD architectures need to be considered.
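As a concrete illustration of the statefulness mentioned above, here is a minimal random-walk Metropolis sampler (a generic MCMC sketch in plain Python, not Graphcore-specific code): each step depends on the state accepted in the previous step, so the chain cannot simply be vectorised across iterations in SIMD fashion.

```python
import math
import random

def metropolis(log_density, x0, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler.

    Each iteration reads the state accepted by the previous one, so the
    loop is inherently sequential -- the kind of dependence that resists
    SIMD-style 'rephrasing' and suits MIMD-style execution instead.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)), in log space.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Sample from a standard normal: log p(x) = -x^2/2 (up to a constant).
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(chain) / len(chain)
```

Parallelism here comes from running many independent chains, each with its own state, rather than from lockstep operations on one array — a workload shape MIMD hardware handles naturally.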
Our experts will discuss how the Graphcore Intelligence Processing Unit (IPU) enables finance firms to use models of greater complexity and iterate faster than is currently possible with legacy processors.
The IPU-M2000 is a revolutionary next-generation system solution built with the Colossus MK2 IPU. It packs 1 petaFLOP of AI compute and up to 450GB Exchange-Memory™ in a slim 1U blade for the most demanding machine intelligence workloads. Designed from the ground up for high performance training and inference workloads, the IPU-M2000 unifies your AI infrastructure for maximum datacentre utilisation.
IPU-POD16 opens up a new world of machine intelligence innovation. Ideal for exploration and experimentation, the IPU-POD16 is the perfect new tool to develop concepts and pilots, consolidating both training and inference in one affordable system.
IPU-POD64 delivers ultimate flexibility to maximise all available space and power in your datacentre, no matter how it is provisioned. IPU-POD64 brings together world-class IPU compute with a choice of best-in-class datacentre technologies and systems in flexible, pre-qualified configurations, to ensure your datacentre is operating with maximum efficiency and performance.
The Boston Graphcore Poplar Server is fully qualified by Graphcore as the default host server for IPU-POD systems. The Boston Poplar is a head node for Graphcore IPU-POD configurations, currently available in an IPU-POD16 configuration with 4 x IPU-M2000s, as well as an IPU-POD64 configuration with 16 x IPU-M2000s.
To help our clients make informed decisions about new technologies, we have opened up our research & development facilities and actively encourage customers to try the latest platforms using their own tools and, where necessary, alongside their existing hardware. Remote access is also available.
We are excited to invite you to our upcoming webinar on Supermicro liquid-cooled workstations, scheduled for 06.10.23. This webinar is tailored to data scientists and GPU users who seek hardware or software solutions and want to explore the possibilities of liquid-cooled workstations.