White Papers

As a company dedicated to partner success, we realise how important it is for our clients to fully understand new products and technologies before buying them. This is why Boston regularly publish technical whitepapers, giving customers a broader overview of the latest technologies Boston bring to market.

Please use the links below to view our latest white papers.

AI Inferencing with AMD EPYC Processors Whitepaper

Everywhere you look, artificial intelligence powers our business world. Once an aspirational endeavour, AI has become a reality thanks to vast leaps in computing power. Read about AMD's AI acceleration in our whitepaper.

Released: 01 March, 2024

What is Intel® Optane™ Persistent Memory (PMem) 200 Series?

Intel® Optane™ Persistent Memory (PMem) 200 is the second generation of Intel's DC Persistent Memory (DCPMM), an emerging technology in which non-volatile media is placed onto a Dual In-Line Memory Module (DIMM) and installed on the memory bus, traditionally used only for volatile memory. Read our whitepaper to find out what Intel Optane PMem is and why you need it...

Released: 13 July, 2022
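
Because PMem sits directly on the memory bus, software can reach it with ordinary loads and stores once a region has been memory-mapped. The minimal Python sketch below illustrates the idea; the /mnt/pmem0 path is an assumption standing in for a DAX-mounted filesystem backed by a PMem namespace, and production code would typically use a persistent-memory library such as PMDK rather than a bare flush to guarantee durability.

```python
import mmap
import os

# Hypothetical file on a DAX-mounted filesystem backed by an Optane PMem
# namespace (the path is an assumption used purely for illustration).
PATH = "/mnt/pmem0/example.dat"
SIZE = 4096  # map a single page

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# With DAX the mapping points straight at the non-volatile DIMMs, so ordinary
# in-memory reads and writes touch the persistent media directly.
with mmap.mmap(fd, SIZE) as buf:
    buf[0:11] = b"hello, pmem"  # a plain store, no write() system call
    buf.flush()                 # ask the OS to make the change durable
os.close(fd)
```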

Micron 7300 and 7400 NVMe SSDs: Selecting the right solution for your needs

Whether your applications pull terabytes of data from disparate databases or you are building data center infrastructure to accommodate immense scale, planning and implementing a successful flash-first strategy is imperative. This guide explores the feature differences between the Micron 7300 and 7400 SSDs and helps you decide which of these SSDs is the best fit for your workloads.

Released: 03 November, 2021

Moor Report - The Graphcore Second Generation IPU

Graphcore, the U.K.-based startup that launched the Intelligence Processing Unit (IPU) for AI acceleration in 2018, has introduced the IPU-Machine. This second-generation platform has greater processing power, more memory and built-in scalability for handling extremely large parallel processing workloads. The well-funded startup has a blue-ribbon pedigree of engineers, advisers and investors, and enjoys a valuation approaching $2 billion. The new MK2 part, manufactured by TSMC, is a massively parallel, 59.4-billion-transistor processor. It delivers some 250 trillion operations per second (TOPS) across 1,472 cores, with 900MB of In-Processor Memory interconnected across a 2.8Tb/s low-latency fabric.

Released: 06 April, 2021

Weka AI Reference Architecture with NVIDIA DGX A100 Systems

The Weka AI™ reference architecture, powered by NVIDIA DGX A100 systems and Weka's industry-leading file system WekaFS™, was developed and verified by Weka and NVIDIA.

Released: 15 January, 2021

How HCI Simplifies IT and Lowers Costs

Early computers integrated computing, storage, and networking resources in a single system. As the need for capacity grew, these elements were disaggregated into separate fiefdoms within the IT infrastructure, making systems more capable but harder to manage. Over the past decade, a few pioneers have pursued the concept of re-integrating these resources into a single system that is more capable and easier to manage. This paper explores that concept, now known as Hyper-Converged Infrastructure (HCI), and demonstrates how it can make IT operations more agile while reducing overall expenses.

Released: 29 October, 2020

NVIDIA DGX A100 System Architecture White Paper

Built on the brand-new NVIDIA A100 Tensor Core GPU, NVIDIA DGX™ A100 is the third generation of DGX systems. Featuring 5 petaFLOPS of AI performance, DGX A100 excels at all AI workloads - analytics, training, and inference - allowing organizations to standardize on a single system that can speed through any type of AI task and dynamically adjust to changing compute needs over time. And with the fastest I/O architecture of any DGX system, NVIDIA DGX A100 is the foundational building block for large AI clusters such as NVIDIA DGX SuperPOD, the enterprise blueprint for scalable AI infrastructure that can scale to hundreds or thousands of nodes to meet the biggest challenges. This unmatched flexibility reduces costs, increases scalability, and makes DGX A100 the universal system for AI infrastructure. In this white paper, we'll take a look at the design and architecture of DGX A100.

Released: 05 August, 2020

vScaler AI Reference Architecture

The convergence of Artificial Intelligence (AI) and High Performance Computing (HPC) has been a driving factor in the broader adoption of HPC by a wide variety of industries. The time is ripe for AI, and organisations looking to gain an edge in business are turning more and more to AI development to build their next generation of products and services.

Released: 03 February, 2020

CFD - Know Your Limits

Formula 1's CFD restriction regime has been significantly shaken up as the FIA looks to cut the costs of aerodynamic development.

Released: 02 January, 2020

The Evolution of Cooling - Why Immersion is the future

The faster a processor runs, the hotter it tends to get when in use. Computer cooling is required to remove the excess heat produced by computer components and keep them within safe operating temperatures - find out how liquid cooling has evolved in our white paper...

Released: 23 June, 2019

Boston Intel® Select Solution for Simulation and Modeling

Build your supercomputing infrastructure on Boston’s extensive industry and design expertise for High Performance Computing applications.

Released: 04 April, 2019

vScaler AI Reference Architecture

The convergence of Artificial Intelligence (AI) and High Performance Computing (HPC) has been a driving factor in the broader adoption of HPC by a wide variety of industries.

Released: 19 March, 2019

The Future-Forward Platform Foundation for Agile Digital World

Across an evolving digital world, disruptive and emerging technology trends in business, industry, science, and entertainment increasingly impact the world's economies. By 2020, the success of half the world's Global 2000 companies will depend on their ability to create digitally enhanced products, services, and experiences, and large organisations expect to see an 80 percent increase in their digital revenues, all driven by advancements in technology and the usage models they enable.

Released: 11 November, 2018

An Optimized Entry-Level Lustre Solution in a Small Form Factor

The performance, scalability and ease of manageability provided by Intel EE for Lustre software make it an excellent choice for entry-level HPC workloads compared to NFS. The small-form-factor architecture can be sized to fit the workload and grow as the workload grows, with the added benefits of excellent performance and manageability.

Released: 25 April, 2017

The future of storage as we know it

Originally released as a version 1 specification by the NVM Express Work Group in March 2011 and refined through several minor revisions since, NVMe has gone from conception to maturity and is now tipped to be one of the most important storage technologies for both server and client computing in the coming decade and beyond.

Released: 24 March, 2017

Intel Xeon E5-2600 v3 Codename Haswell-EP Launch

It's that time of year again when Intel finally releases its latest enterprise processor for the dual-processor segment to an eagerly waiting professional market. Following the traditional early-September launch time frame, the Xeon E5-2600 v3 processor series, codenamed Haswell-EP, has been officially launched, finally allowing us at Boston Labs to go through all the exciting details of the processors we've been testing secretly in our labs for some time.

Released: 30 September, 2014

Commodity hardware design guide on Boston Limited hardware - Citrix validated solution

This guide, produced by Citrix India, provides high-level design details that describe the architecture for the XenDesktop 7.1 Citrix Validated Solution running on commodity hardware by Boston. The architecture is based on the fundamentals of how cloud computing works in conjunction with commodity hardware to considerably lower the total cost of ownership of a Citrix XenDesktop environment. This guide has been created through architectural design best practices obtained from Citrix Consulting Services and through lab testing, and is intended to provide guidance for solution evaluation.

Released: 07 July, 2014

Cost Efficient VDI on Commodity Hardware Whitepaper - Intel Edition (Citrix Partner Huddle, India)

By partnering with Citrix India and leveraging commodity hardware by Supermicro, Boston Limited has designed and validated a cost-effective yet robust virtual desktop infrastructure that is scalable from 500 to 10,000 users. Following on from the launch of our VDI solution at CeBIT 2014 (below), Boston has developed another Citrix XenDesktop 7-based VDI solution to explore the performance benefits of using class-leading Intel Xeon E5-2600 v2 series processors.

Released: 24 April, 2014

Cost Efficient VDI on Commodity Hardware Whitepaper (CeBIT 2014)

An increasing number of enterprises are looking towards desktop virtualization to help them respond to rising IT costs, security concerns, and the user demands of BYOD and mobile working strategies. But can a desktop virtualization solution have a lower or equivalent Total Cost of Ownership (TCO) when compared to the traditional approach of procuring physical desktops? Is there a solution which delivers all the benefits of scalability and performance while maintaining a lower TCO? Leveraging commodity hardware is the answer.

Released: 10 March, 2014

The Boston Viridis ARM® Server: Addressing the Power Challenges of Exascale?

At Boston we believe we are at an inflection point within the supercomputing industry. There are powerful economic drivers disrupting the dominance of the x86 server and, just as old vector supercomputers were replaced by x86 commodity supercomputers, the ARM platform looks set to become the next disruptive technology in the commodity chain.

Released: 19 November, 2012

LSI MegaRAID CacheCade Pro 2.0 Software Evaluation

LSI® MegaRAID® CacheCade® Pro 2.0 software promises to be the answer to IT managers' serious pain points with storage: those related to random I/O performance. This new functionality is supported on several LSI SAS 6Gb/s RAID controller lines and adds SSD caching to the already impressive feature set, helping to improve performance and negate some of the pitfalls of traditional hard disk technology. Today it is possible to purchase magnetic hard disks with capacities of up to 3TB for under £300, but they struggle to provide 200 IOPS, while some SSDs easily achieve 50,000 IOPS but cost 5-10 times that for only a few hundred gigabytes of capacity.

Released: 25 June, 2012
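
Those figures are the whole case for SSD caching: hard disks win on cost per gigabyte while SSDs win on cost per IOPS, so a small SSD cache in front of large disks can capture much of the random-I/O benefit without paying the SSD price for bulk capacity. A rough back-of-the-envelope comparison, using only the approximate numbers quoted above (the SSD price and capacity are assumed midpoints, not figures from the whitepaper):

```python
# Back-of-the-envelope cost comparison using the approximate 2012 figures above.
# The SSD price (~7.5x the HDD) and capacity (~300GB) are assumed midpoints.
devices = {
    "3TB HDD":    {"capacity_gb": 3000, "price_gbp": 300,       "iops": 200},
    "~300GB SSD": {"capacity_gb": 300,  "price_gbp": 300 * 7.5, "iops": 50_000},
}

for name, d in devices.items():
    cost_per_gb = d["price_gbp"] / d["capacity_gb"]
    cost_per_kiops = d["price_gbp"] / (d["iops"] / 1000)
    print(f"{name}: £{cost_per_gb:.2f} per GB, £{cost_per_kiops:.2f} per 1,000 IOPS")
```

On these assumed numbers the hard disk works out at roughly £0.10 per GB but around £1,500 per 1,000 IOPS, while the SSD costs about £7.50 per GB and only about £45 per 1,000 IOPS - exactly the gap a CacheCade-style SSD cache is designed to exploit.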

Revolutionising the data centre with application-specific servers based on ARM® processors

Web giants such as Google and Facebook are inching towards the Arctic Circle, building their latest data centres in countries such as Finland and Sweden to cope with the exponential demand for their Internet services. What was forbidding terrain for agriculture and manufacturing is now home to power-hungry server farms that not only need sustainable sources of energy, but an extremely cold climate to chill servers. When Facebook announced that it was building a data centre in Lulea, Sweden, less than 100km south of the Arctic Circle, the Guardian reported that 'each of Facebook's US data centres is estimated to use the same amount of electricity as 30,000 US homes. Energy consumption of warehouses run by companies such as Facebook, Google and Amazon is among the fastest growing sources of global electricity demand.'

Released: 15 June, 2012

Find your solution

Test out any of our solutions at Boston Labs

To help our clients make informed decisions about new technologies, we have opened up our research & development facilities and actively encourage customers to try the latest platforms using their own tools and, if necessary, alongside their existing hardware. Remote access is also available.

Contact us
