Data Center as a PCB
- Create a cloud computing server able to operate at an ambient temperature of 35 °C. This temperature is a good average of the peak temperatures observed in Western Europe during the summer.
- Reduce cabling between compute nodes, storage, and network infrastructure as much as possible, to limit human error during maintenance operations and improve network performance.
- Share I/O devices (network NICs, storage interfaces), which are becoming very costly compared to compute resources.
- Reduce the cost of a compute node to under US$600, including the interconnect.
- Use industry-standard components that can run at high temperatures: AMD R-Series "Bald Eagle" with ECC support (Tj up to 95 °C)
- Use a local PCIe fabric to interconnect compute nodes, lowering system cost and improving performance
- Use SR-IOV technology from PLX Technology to share I/O devices (see the sketch after this list)
- Put everything on the same PCB, interconnected through PCIe or 40 Gb/s Ethernet
- 48 cores in 1U across 12 local compute nodes (see the per-node breakdown after this list)
- 384 GB of DDR3 ECC main memory
- 48 Gb/s local drive bandwidth
- 32 Gb/s low-latency interconnect per node
- 384 Gb/s local fabric
- 80 Gb/s maximum Ethernet bandwidth
- Everything soldered down
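To make the per-node budget explicit, here is a minimal sketch that derives the per-node figures from the 1U totals listed above. It only restates the numbers from the list (12 nodes, 48 cores, 384 GB, 384 Gb/s fabric) and assumes an even split across nodes.

```python
# Per-node budget derived from the 1U totals listed above,
# assuming resources are split evenly across the 12 compute nodes.
NODES = 12

totals = {
    "cores": 48,          # total CPU cores in 1U
    "memory_gb": 384,     # total DDR3 ECC memory (GB)
    "fabric_gbps": 384,   # aggregate local PCIe fabric bandwidth (Gb/s)
}

for name, total in totals.items():
    print(f"{name}: {total} total -> {total / NODES:g} per node")

# Expected output:
# cores: 48 total -> 4 per node
# memory_gb: 384 total -> 32 per node
# fabric_gbps: 384 total -> 32 per node
```

The 32 Gb/s per-node figure in the list matches the 384 Gb/s aggregate fabric, and 4 cores per node is consistent with one quad-core "Bald Eagle" R-Series part per node.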
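As referenced in the I/O-sharing goal above, SR-IOV lets a single physical NIC or storage controller expose multiple virtual functions that can be handed to different compute nodes or virtual machines. The sketch below is illustrative only: it uses the generic Linux sysfs mechanism for enabling virtual functions on an SR-IOV-capable device; the PCI address is a placeholder, and the PLX-specific fabric configuration is not shown.

```python
import os

# Hypothetical PCI address of an SR-IOV-capable NIC; replace with the
# actual bus/device/function of the shared adapter on the fabric.
PF_ADDR = "0000:03:00.0"
SYSFS = f"/sys/bus/pci/devices/{PF_ADDR}"


def enable_vfs(num_vfs: int) -> None:
    """Enable `num_vfs` virtual functions on the physical function.

    Uses the standard Linux sysfs attributes (sriov_totalvfs /
    sriov_numvfs); requires root and an SR-IOV-capable device/driver.
    """
    with open(os.path.join(SYSFS, "sriov_totalvfs")) as f:
        total = int(f.read())
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} VFs")

    # Drivers require the VF count to be reset to 0 before changing it.
    with open(os.path.join(SYSFS, "sriov_numvfs"), "w") as f:
        f.write("0")
    with open(os.path.join(SYSFS, "sriov_numvfs"), "w") as f:
        f.write(str(num_vfs))


if __name__ == "__main__":
    # For example, one virtual function per compute node sharing the device.
    enable_vfs(12)
```

Each virtual function then appears to its node as an independent PCIe device, which is what allows costly 40 Gb/s NICs and storage controllers to be shared across the 12 nodes.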
We Rethought Everything
They are bringing this project to life