Learn how to set up a GPU-enabled virtual server instance (VSI) on a Virtual Private Cloud (VPC) and deploy RAPIDS using IBM Schematics.

The GPU-enabled family of profiles provides on-demand, cost-effective access to NVIDIA GPUs. GPUs accelerate compute-intensive workloads such as artificial intelligence (AI), machine learning, and inferencing. To use the GPUs, you need the appropriate toolchain, such as CUDA (Compute Unified Device Architecture), installed and ready.
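
Once the instance is up, a quick smoke test confirms that the driver, the CUDA toolchain, and RAPIDS can actually reach the GPU. The following is a minimal sketch, assuming RAPIDS (cuDF) is already installed on the GPU-enabled VSI; the column names and values are illustrative only.

```python
# Minimal RAPIDS smoke test; assumes cuDF is installed on the GPU VSI.
import cudf

print(cudf.__version__)

# Build a small DataFrame in GPU memory and run a simple aggregation.
# If this succeeds, the NVIDIA driver and CUDA toolchain are usable from Python.
gdf = cudf.DataFrame({"x": [1, 2, 3, 4], "y": [10.0, 20.0, 30.0, 40.0]})
print(gdf.groupby("x").agg({"y": "mean"}))
```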

Let’s start with a simple question.

Article source: DZONE

You can expose your app to the public by setting up a Kubernetes LoadBalancer service in your IBM Cloud Kubernetes Service cluster. When you expose your app, a Load Balancer for VPC that routes requests to your app is created for you automatically in your VPC, outside of your cluster.
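
To make the mechanism concrete, here is a minimal sketch of the Kubernetes object involved: a Service of type LoadBalancer, created with the official `kubernetes` Python client rather than the YAML/kubectl or Terraform workflow the article itself uses. The service name, selector labels, and ports are illustrative assumptions, not values from the article.

```python
# Sketch: create a LoadBalancer-type Service, which prompts IBM Cloud
# Kubernetes Service to provision a Load Balancer for VPC in front of it.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig for the cluster
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-app-lb"),  # hypothetical name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",            # triggers creation of the VPC load balancer
        selector={"app": "my-app"},     # must match your Deployment's pod labels
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```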

In this post, you will provision an IBM Cloud Kubernetes Service cluster spanning two private subnets (each subnet in a different zone), deploy an application using a container image stored in IBM Cloud Container Registry, and expose the app via a VPC load balancer deployed to a public subnet in a different zone. Sound complex? Don't worry: you will provision and deploy everything with Terraform scripts.
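
The article drives all of this with Terraform scripts; purely to illustrate the deployment half of what those scripts end up creating, here is a sketch using the `kubernetes` Python client. The names, labels, replica count, and the image path under us.icr.io are hypothetical placeholders, not values from the article.

```python
# Sketch: deploy an app from an image in IBM Cloud Container Registry.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),  # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-app",
                        # Placeholder IBM Cloud Container Registry image reference.
                        image="us.icr.io/my-namespace/my-app:latest",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```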

Article source: DZONE