What I Infer As 6 Keys to Effectively Modularize Cloud Infrastructure Hardware

Intrigued by LEGO building blocks, I started to learn about their beginnings. What I found was that 1947 was an important year for the company: they purchased their first plastic injection molding machine, which allowed them to mass-manufacture bricks. Following that, they patented the interlocking principle of LEGO bricks in 1957, and in 1958 the stud-and-coupling system was patented as well, adding significant structural stability.

The emphasis here is that the size and form of the brick, and the foundational principle that determined how the bricks would come together, were backed by capable processes and systems that propelled the exponential growth of the concept.

Taking a leaf out of that history, I stopped to ask the basic question – how do we achieve optimal modularity in Cloud Infrastructure hardware? Let us dive in to see what we have learnt:

Achieving modularity in hardware

Modularity in hardware is determined by a few key attributes, explained below:

  1. Usability or Attach rate:  How often a particular feature gets used is important in determining whether a component gets modularized or is developed as a monolithic design element, specifically as it relates to the cost of the design. The reason is that the usability factor drives the volume of the module, which in turn directly determines the cost of the module. A general rule is to optimize the design as much as possible and build the feature into the system if the usability or attach rate is high and the implementation is vendor unique. Where exactly the line gets drawn is governed by a tradeoff analysis of the cost of modularity, the flexibility in vendor choice needed to seed creativity, and the attach rate of the module (a rough cost sketch follows this list).
  2. Flexibility or Choice or Creativity:  If many companies offer the same feature and, as a system designer, you want the flexibility to use any of their designs to attain that feature, then there is certainly a case for modularity. For example, consider the network interface controller or the storage controller that is part of the server. Different companies offer the feature, each focusing on a specific attribute fine-tuned for certain workloads. When the system caters to a diverse set of workloads, having the option to fit in either company's card to optimize performance is preferred.
  3. Quality:  Sometimes it helps to drive accountability by having a clear delineation of who is responsible for a particular sub-component. Especially when quality is key and each sub-component is tracked to hit a certain MTBF or AIR contribution to total system downtime, it is better to build clear accountability around all the sub-components and establish ownership (see the downtime roll-up after this list).
  4. Availability of Proof of Concept:  With development times shrinking and an ever-increasing need to get to deployment faster, the availability of an early proof of concept is becoming an important factor in product development. This is especially true when the proof of concept acts as a development bed not only for the hardware and firmware interface but also for the entire solution stack and for fine-tuning the performance of that stack.
  5. Adaptability to Technology shifts:  A server contains different technologies, and they very rarely all shift at the same time. In the current scenario, the shift from one system generation to the next is mostly determined by the processor technology shift. However, other technologies have their own timing and are only loosely tied to the processor shift. If we are able to decouple the different technologies into subsystems that live in their own modules, we can control the specific deployment of the technology that provides the most value, and the design becomes more adaptable to the differing technology cycles. Technology shifts usually require significant capital investment, and it is key to be able to focus on the ones that provide the most value.
  6. Serviceability:  In some instances there are strict requirements for the serviceability of certain components in the server. This requirement then drives those components to be available as modules. For example, power supplies, fans, hard drives, memory, and processors are some of the components that fall into this category, as they need to be easily replaceable by a data center technician with little prior knowledge of the system. Considering the cost of servicing and the time the system is down while being serviced, there may well be a case for making the particular feature easily serviceable (a back-of-the-envelope comparison follows below).
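
To make the attach-rate tradeoff from point 1 concrete, here is a minimal sketch comparing the per-system cost of building a feature into the board versus shipping it as a plug-in module. Every number in it (BOM cost, connector overhead, NRE, fleet size) is a made-up placeholder for illustration, not data from any real program:

```python
# Illustrative attach-rate tradeoff: integrated vs. modular feature cost.
# All numbers below are hypothetical placeholders, not real component prices.

def integrated_cost_per_system(feature_bom: float) -> float:
    """Feature is built into every board, so every system carries its BOM cost,
    whether or not the feature is actually used."""
    return feature_bom  # paid on 100% of systems regardless of attach rate

def modular_cost_per_system(feature_bom: float, attach_rate: float,
                            connector_overhead: float, module_nre: float,
                            fleet_size: int) -> float:
    """Feature ships as a module: only attached systems pay for it, but every
    system pays for the connector, and the module NRE is amortized over the
    (smaller) module volume."""
    module_volume = max(1, int(fleet_size * attach_rate))
    amortized_nre = module_nre / module_volume
    return connector_overhead + attach_rate * (feature_bom + amortized_nre)

if __name__ == "__main__":
    for attach_rate in (0.10, 0.50, 0.90):
        integrated = integrated_cost_per_system(feature_bom=40.0)
        modular = modular_cost_per_system(feature_bom=40.0, attach_rate=attach_rate,
                                          connector_overhead=3.0, module_nre=250_000.0,
                                          fleet_size=100_000)
        print(f"attach rate {attach_rate:.0%}: integrated ${integrated:.2f} "
              f"vs modular ${modular:.2f} per system")
```

With these illustrative numbers the module wins at a 10% attach rate and the integrated design wins at 90%, which is exactly the line-drawing exercise the tradeoff analysis has to settle.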
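
For the quality point (3), the accountability argument is easier to see when each module's MTBF and repair time are rolled up into an expected downtime contribution that an owner can be held to. The sketch below does that roll-up; the module names, MTBF figures, and repair times are illustrative assumptions only:

```python
# Illustrative roll-up of per-module downtime contribution from MTBF and repair time.
# The module list and all figures are made-up placeholders.

HOURS_PER_YEAR = 8760.0

def annual_failures(mtbf_hours: float) -> float:
    """Approximate annualized failure rate (failures per year) from MTBF."""
    return HOURS_PER_YEAR / mtbf_hours

def annual_downtime_minutes(mtbf_hours: float, repair_minutes: float) -> float:
    """Expected downtime per year contributed by one module."""
    return annual_failures(mtbf_hours) * repair_minutes

modules = {
    # module: (MTBF in hours, mean time to repair in minutes) -- illustrative only
    "power_supply": (200_000, 15),
    "fan":          (300_000, 10),
    "nic_module":   (500_000, 30),
    "drive":        (1_200_000, 20),
}

if __name__ == "__main__":
    total = 0.0
    for name, (mtbf, mttr) in modules.items():
        downtime = annual_downtime_minutes(mtbf, mttr)
        total += downtime
        print(f"{name:>13}: ~{downtime:.2f} min/year of expected downtime")
    print(f"{'system total':>13}: ~{total:.2f} min/year")
```

Once each module has its own line in that roll-up, ownership of the budget is unambiguous, which is the accountability that modularity buys.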
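
And for serviceability (6), the decision often comes down to comparing the cost of a repair event when the part is a tool-less module against the cost when it is embedded in the board. The following back-of-the-envelope sketch uses hypothetical downtime and labor figures:

```python
# Illustrative serviceability comparison: swappable module vs. embedded component.
# Downtime cost, technician rates, and repair times are hypothetical placeholders.

def service_event_cost(downtime_minutes: float, downtime_cost_per_min: float,
                       technician_minutes: float, technician_rate_per_min: float) -> float:
    """Total cost of one repair event: lost capacity while down plus labor."""
    return (downtime_minutes * downtime_cost_per_min
            + technician_minutes * technician_rate_per_min)

if __name__ == "__main__":
    # A tool-less, front-serviceable module might be swapped in minutes...
    modular = service_event_cost(downtime_minutes=10, downtime_cost_per_min=5.0,
                                 technician_minutes=10, technician_rate_per_min=1.5)
    # ...while an embedded part may require pulling and reworking the whole board.
    embedded = service_event_cost(downtime_minutes=240, downtime_cost_per_min=5.0,
                                  technician_minutes=90, technician_rate_per_min=1.5)
    print(f"serviceable module: ~${modular:.0f} per repair event")
    print(f"embedded component: ~${embedded:.0f} per repair event")
```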

There you have it – the 6 keys used to optimize Cloud Infrastructure hardware are Usability, Flexibility, Quality, Availability, Adaptability, and Serviceability.