2018 Wiwynn TechDay Japan & USA Recap

Wiwynn TechDay 2018 in Japan and the USA concluded successfully. From the announcements to the great agendas, the networking, and, for the speaking crew, plenty of great conversations, there is no way to recap everything in one post, but we did our best to cram in a few of the highlights.



CORD ONF Talks About Edge Cloud

William Snow, Chief Development Officer, Open Networking Foundation


The CORD platform enables rapid service innovation and deployment, as evidenced by the community's success in developing open source VNFs and integrated PoCs at a very rapid pace.

Check out the presentation for further information.

EN Video


Discover Disaggregated Servers and Software Defined Data Center with Intel® Rack Scale Design

Christian Buerger, Marketing Director,
Software Defined Datacenter Group, Intel


Intel® RSD is a logical architecture. The key concept is to disaggregate hardware, such as compute, storage and network resources, from preconfigured servers and deploy them in sharable resource pools.
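The pooling idea can be sketched in a few lines of code. This is purely a conceptual illustration of disaggregation, with made-up names and numbers; the actual Intel® RSD architecture composes nodes through a pod manager and Redfish-style REST APIs, not in-process bookkeeping like this:

```python
# Conceptual sketch of disaggregation: hardware sits in shared pools
# and a "composed node" (logical server) is assembled on demand.
# All names and capacities here are illustrative, not the RSD API.

from dataclasses import dataclass

@dataclass
class ResourcePools:
    cpus: int = 64      # CPU sockets available in the pod
    drives: int = 100   # NVMe drives in the shared storage pool
    nics: int = 32      # network ports in the shared pool

@dataclass
class ComposedNode:
    cpus: int
    drives: int
    nics: int

def compose_node(pools: ResourcePools, cpus: int, drives: int, nics: int) -> ComposedNode:
    """Carve a logical server out of the shared pools, failing if any
    pool lacks capacity. Unlike a preconfigured server, the resources
    come from (and can be returned to) a shared pool."""
    if cpus > pools.cpus or drives > pools.drives or nics > pools.nics:
        raise ValueError("insufficient pooled resources")
    pools.cpus -= cpus
    pools.drives -= drives
    pools.nics -= nics
    return ComposedNode(cpus, drives, nics)
```

The point of the sketch is the accounting model: a composed node's resources are subtracted from shared pools rather than being fixed at assembly time, which is what lets operators right-size servers per workload.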

EN Video


Wiwynn Cluster Manager with 19″ and OCP Accepted Building Blocks

Ethan SL Yang, Deputy Manager, Wiwynn Corporation


Wiwynn® Cluster Manager is system software that makes data centers easier to manage, with features such as resource planning, mass firmware and OS deployment, and real-time rack-level visual monitoring.

EN Video


Wiwynn Compute Accelerators Introduction

Using Multi-GPU Accelerators for AI Practice – Example: Face Swap

  • Over 100,000 photos for each person
  • 12 to 15 hours of training for each person
  • How do we reduce the training time?
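As a back-of-the-envelope illustration of why multi-GPU accelerators help here, near-linear data-parallel scaling would shrink a 12-to-15-hour single-GPU run considerably. The baseline time and the scaling-efficiency figure below are assumptions for illustration, not Wiwynn's published benchmarks:

```python
# Rough estimate of multi-GPU training time under data parallelism.
# The 13.5 h baseline and 0.9 efficiency factor are illustrative
# assumptions, not measured numbers.

def estimated_training_hours(single_gpu_hours: float, num_gpus: int,
                             scaling_efficiency: float = 0.9) -> float:
    """Divide the single-GPU time by an effective GPU count that
    discounts each extra GPU for communication overhead
    (gradient all-reduce, data loading)."""
    effective_gpus = 1 + (num_gpus - 1) * scaling_efficiency
    return single_gpu_hours / effective_gpus

if __name__ == "__main__":
    for gpus in (1, 4, 8, 16):
        print(f"{gpus:2d} GPU(s): ~{estimated_training_hours(13.5, gpus):.1f} h")
```

Even with imperfect scaling, the estimate shows an 8- or 16-GPU accelerator bringing a half-day training run down to roughly an hour or two per identity.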

Check out the Wiwynn Multi-GPU Accelerators for further information.

EN Video


AI Computing on NVIDIA GPUs

Patrick Donelly, Solutions Architect, NVIDIA


For deep learning, NVIDIA GPU Cloud empowers AI researchers with performance-engineered containers featuring deep learning software such as TensorFlow, PyTorch, MXNet, TensorRT, and more. NVIDIA also provides a wide range of GPU-accelerated platforms you can use to accelerate deep learning training and inference application workloads.

EN Video


Penguin Computing – Building for HyperScale

William Wu, Director of Product Management
Penguin Computing


William discusses what Penguin Computing can offer for AI, covering the application of HPC discipline as well as Wiwynn server validation and L10/L11 test items and coverage.

Check out the presentation for further information.

EN Video

Wiwynn GPU Server Products

Wiwynn offers a complete GPU server lineup, including the 21-inch 4U Dual Socket GPU Server for OCP users and the 19-inch 4U8G Dual Socket GPU Server for traditional 19-inch rack users.

If you already have sufficient servers and just want to scale up your GPU capability, we have GPU accelerators for you. The Gen1 and Gen2 XC200 series, the 4U16X GPU Accelerator, are great choices. Both are disaggregated systems that house only GPU cards.


SV7400 Series

4U8G Dual Socket GPU Server
  • PCIe 3.0 x16 slots
  • Large simulations and efficient training
  • Integrates the field-proven Project Olympus server board
  • Supports 1:8 application workloads


SV500 Series

4U8G Dual Socket GPU Server
  • PCIe 3.0 x16 slots
  • Large simulations and efficient training
  • Integrates the field-proven Project Olympus server board
  • Supports 1:8 application workloads


XC200 Series

4U16X GPU Accelerator
  • Disaggregated PCIe accelerator
  • 16 PCIe 3.0 x16 add-in cards
  • Flexible configuration for connecting up to 4 servers
  • 4 drawers for tool-less, easy maintenance
  • Supports 4:16, 2:8, and 1:16 application workloads