Wiwynn TechDay 2018 Japan and USA have concluded successfully. From the announcements and great agendas to the networking and, for the speaking crew, plenty of great conversations — there is no way to recap everything in one episode, but we tried our best to cram in a few of the highlights.
CORD ONF Talks About Edge Cloud
William Snow, Chief Development Officer, Open Networking Foundation
The CORD platform allows for rapid services innovation and deployment as evidenced by our success as a community to develop open source VNFs as well as integrated PoCs at a very rapid pace.
Check out the presentation for further information.
Discover Disaggregated Servers and Software Defined Data Center with Intel® Rack Scale Design
Christian Buerger, Marketing Director,
Software Defined Datacenter Group, Intel
Intel® RSD is a logical architecture. The key concept is to disaggregate hardware — compute, storage, and network resources — from preconfigured servers and deploy it in shareable resource pools.
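To make the pooling concept concrete, here is a minimal conceptual sketch in Python. This is not the Intel® RSD API (which is REST/Redfish-based); the class names, pool contents, and `compose_node` helper are all illustrative assumptions, showing only the idea of assembling a logical server from shared resource pools.

```python
# Conceptual sketch of disaggregated resource pooling (NOT the Intel RSD API).
# All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A shareable pool of one resource type (compute, storage, or network)."""
    kind: str
    available: list = field(default_factory=list)

    def allocate(self, count):
        # Hand out `count` resources from the pool, if enough remain.
        if count > len(self.available):
            raise RuntimeError(f"not enough {self.kind} resources")
        taken, self.available = self.available[:count], self.available[count:]
        return taken

def compose_node(pools, request):
    """Assemble a logical server by drawing each resource type from its pool."""
    return {kind: pools[kind].allocate(count) for kind, count in request.items()}

pools = {
    "compute": ResourcePool("compute", [f"cpu-{i}" for i in range(8)]),
    "storage": ResourcePool("storage", [f"ssd-{i}" for i in range(16)]),
    "network": ResourcePool("network", [f"nic-{i}" for i in range(4)]),
}

# Compose one logical node with 2 CPUs, 4 SSDs, and 1 NIC from the shared pools;
# the remaining resources stay in the pools for other nodes to use.
node = compose_node(pools, {"compute": 2, "storage": 4, "network": 1})
```

The point of the sketch is the decoupling: a "server" becomes a record of references into shared pools rather than a fixed box of hardware, so capacity can be rebalanced without touching physical machines.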
Wiwynn Cluster Manager with 19″ and OCP Accepted Building Blocks
Ethan SL Yang, Deputy Manager, Wiwynn Corporation
Wiwynn® Cluster Manager is a system software that makes data centers easier to manage with features such as resource planning, mass firmware and OS deployment, and real-time rack-level visual monitoring.
Wiwynn Compute Accelerators Introduction
Using Multi-GPU Accelerators for AI Practice – example: Face Swap
- Over 100,000 photos for each person
- 12 to 15 hours of training for each person
- How do we reduce the training time?
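The question above is what multi-GPU training answers: splitting the work across accelerators cuts wall-clock time. As a back-of-envelope sketch (our assumption: near-linear scaling with a fixed efficiency factor; real speedup depends on the model, batch size, and interconnect), the 12-to-15-hour single-GPU figure shrinks to roughly two hours on eight GPUs:

```python
# Hypothetical estimate, assuming near-linear multi-GPU scaling with a
# fixed efficiency factor (not a measured Wiwynn benchmark).
def estimated_hours(single_gpu_hours, num_gpus, scaling_efficiency=0.9):
    """Estimate wall-clock training time when work is split across GPUs."""
    return single_gpu_hours / (num_gpus * scaling_efficiency)

# A 12-to-15-hour single-GPU face-swap training run spread over 8 GPUs:
low = estimated_hours(12, 8)   # about 1.7 hours
high = estimated_hours(15, 8)  # about 2.1 hours
```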
Check out Wiwynn Multiple-GPU Accelerators
AI Computing on NVIDIA GPUs
Patrick Donelly, Solutions Architect, NVIDIA
For deep learning, NVIDIA GPU Cloud empowers AI researchers with performance-engineered containers featuring deep learning software such as TensorFlow, PyTorch, MXNet, TensorRT, and more. NVIDIA also provides a wide range of GPU-accelerated platforms you can use to accelerate deep learning training and inference application workloads.
Penguin Computing – Building for HyperScale
William Wu, Director of Product Management
William talks about what Penguin Computing can offer for AI, covering the application of HPC discipline, Wiwynn server validation, and L10/L11 test items and coverage.
Check out the presentation for further information.
Wiwynn GPU Server Products
Wiwynn offers a complete GPU server lineup, which includes the 21-inch 4U Dual Socket GPU Server for OCP users and the 19-inch 4U8G Dual Socket GPU Server for traditional 19-inch rack users.
If you already have sufficient servers and just want to scale up your GPU capability, we have GPU Accelerators for you. The Gen1 and Gen2 of the XC200 series, the 4U16X GPU Accelerators, are great choices. Both are disaggregated systems that house only GPU cards.