---

copyright:
  years:
lastupdated: "2022-01-11"

keywords: kubernetes, openshift, red hat, red hat openshift

subcollection: openshift

content-type: tutorial
services: openshift, vpc
account-plan:
completion-time: 45m

---
{{site.data.keyword.attribute-definition-list}}
# Creating {{site.data.keyword.openshiftlong_notm}} clusters in your Virtual Private Cloud (VPC)
{: #vpc_rh_tutorial}
{: toc-content-type="tutorial"}
{: toc-services="openshift, vpc"}
{: toc-completion-time="45m"}

Create an {{site.data.keyword.openshiftlong}} cluster in your Virtual Private Cloud (VPC).
{: shortdesc}
With {{site.data.keyword.openshiftlong_notm}} clusters on VPC, you can create your cluster in the next generation of the {{site.data.keyword.cloud_notm}} platform, in your Virtual Private Cloud.
- {{site.data.keyword.openshiftlong_notm}} gives you all the advantages of a managed offering for your cluster infrastructure environment, while using the {{site.data.keyword.openshiftshort}} tooling and catalog{: external} that runs on Red Hat Enterprise Linux for your app deployments.
- VPC gives you the security of a private cloud environment with the dynamic scalability of a public cloud. VPC uses the next version of {{site.data.keyword.openshiftlong_notm}} infrastructure providers, with a select group of v2 API, CLI, and console functionality.
{{site.data.keyword.openshiftshort}} worker nodes are available for paid accounts and standard clusters only. You can create {{site.data.keyword.openshiftshort}} clusters that run version 4 only. The operating system is Red Hat Enterprise Linux 7.
{: note}

## Objectives
{: #vpc_rh_objectives}
In the tutorial lessons, you create a {{site.data.keyword.openshiftlong_notm}} cluster in a Virtual Private Cloud (VPC). Then, you access built-in {{site.data.keyword.openshiftshort}} components, deploy an app in an {{site.data.keyword.openshiftshort}} project, and expose the app with a VPC load balancer so that external users can access the service.
## Audience
{: #vpc_rh_audience}

This tutorial is for administrators who are creating a cluster with {{site.data.keyword.openshiftlong_notm}} on VPC compute for the first time.
{: shortdesc}
## Prerequisites
{: #vpc_rh_prereqs}

Complete the following prerequisite steps to set up permissions and the command-line environment.
{: shortdesc}
Permissions: If you are the account owner, you already have the required permissions to create a cluster and can continue to the next step. Otherwise, ask the account owner to set up the API key and assign you the minimum user permissions in {{site.data.keyword.cloud_notm}} IAM.
Command-line tools: For quick access to your resources from the command line, try the {{site.data.keyword.cloud-shell_notm}}{: external}. Otherwise, set up your local command-line environment by completing the following steps.
- Install the {{site.data.keyword.openshiftshort}} (`oc`) and Kubernetes (`kubectl`) CLIs.
- To work with VPC, install the `infrastructure-service` plug-in. The prefix for running commands is `ibmcloud is`.

    ```sh
    ibmcloud plugin install infrastructure-service
    ```
    {: pre}

- Update your {{site.data.keyword.containershort_notm}} plug-in to the latest version.

    ```sh
    ibmcloud plugin update kubernetes-service
    ```
    {: pre}
## Lesson 1: Creating a cluster in your Virtual Private Cloud
{: #vpc_rh_create_vpc_cluster}
{: step}

Create an {{site.data.keyword.cloud_notm}} Virtual Private Cloud (VPC) environment. Then, create a {{site.data.keyword.openshiftlong_notm}} cluster on the VPC infrastructure. For more information about VPC, see Getting started with Virtual Private Cloud.
{: shortdesc}
1. Log in to the account, resource group, and {{site.data.keyword.cloud_notm}} region where you want to create your VPC environment. The VPC must be set up in the same multizone metro location where you want to create your cluster. In this tutorial, you create a VPC in `us-south`. For other supported regions, see Multizone metros for VPC clusters. If you have a federated ID, include the `--sso` flag.

    ```sh
    ibmcloud login -r us-south [-g <resource_group>] [--sso]
    ```
    {: pre}
2. Create a VPC for your cluster. For more information, see the docs for creating a VPC in the console or CLI.
    1. Create a VPC called `myvpc` and note the ID in the output. VPCs provide an isolated environment for your workloads to run within the public cloud. You can use the same VPC for multiple clusters, such as if you plan to have different clusters host separate microservices that need to communicate with each other. If you want to separate your clusters, such as for different departments, you can create a VPC for each cluster.

        ```sh
        ibmcloud is vpc-create myvpc
        ```
        {: pre}

    2. Create a public gateway and note the ID in the output. In the next step, you attach the public gateway to a VPC subnet so that your worker nodes can communicate on the public network. Default {{site.data.keyword.openshiftshort}} components, such as the web console and OperatorHub, require public network access. If you skip this step, you must instead be connected to your VPC private network, such as through a VPN connection, to access the {{site.data.keyword.openshiftshort}} web console or access your cluster with `kubectl` commands.

        ```sh
        ibmcloud is public-gateway-create gateway-us-south-1 <vpc_ID> us-south-1
        ```
        {: pre}

    3. Create a subnet for your VPC, and note its ID. Consider the following information when you create the VPC subnet:
        - Zones: You must have one VPC subnet for each zone in your cluster. The available zones depend on the metro location that you created the VPC in. To list available zones in the region, run `ibmcloud is zones`.
        - IP addresses: VPC subnets provide private IP addresses for your worker nodes and load balancer services in your cluster, so make sure to create a subnet with enough IP addresses, such as 256. You can't change the number of IP addresses that a VPC subnet has later.
        - Public gateways: Include the public gateway that you previously created. You must have one public gateway for each zone in your cluster.

        ```sh
        ibmcloud is subnet-create mysubnet1 <vpc_ID> --zone us-south-1 --ipv4-address-count 256 --public-gateway-id <gateway_ID>
        ```
        {: pre}
3. Create a standard {{site.data.keyword.cos_full_notm}} instance to back up the internal registry in your cluster. In the output, note the instance ID.

    ```sh
    ibmcloud resource service-instance-create myvpc-cos cloud-object-storage standard global
    ```
    {: pre}
4. Create a cluster in your VPC in the same zone as the subnet.
    - The following command creates a version 4.8 cluster in Dallas with the minimum configuration of 2 worker nodes that have at least 4 cores and 16 GB memory so that default {{site.data.keyword.openshiftshort}} components can deploy.
    - By default, your cluster is created with a public and a private cloud service endpoint. You can use the public cloud service endpoint to access the Kubernetes master, such as to run `oc` commands, from your local machine. Your worker nodes communicate with the master on the private cloud service endpoint. For the purposes of this tutorial, do not specify the `--disable-public-service-endpoint` flag.
    - For more information about the command options, see the `cluster create vpc-gen2` CLI reference docs.

    ```sh
    ibmcloud oc cluster create vpc-gen2 --name myvpc-cluster --zone us-south-1 --version 4.8_openshift --flavor bx2.4x16 --workers 2 --vpc-id <vpc_ID> --subnet-id <vpc_subnet_ID> --cos-instance <COS_CRN>
    ```
    {: pre}
5. List your cluster details. Review the cluster State, check the Ingress Subdomain, and note the Master URL.

    Your cluster creation might take some time to complete. After the cluster state shows Normal, the cluster network and Ingress components take about 10 more minutes to deploy and update the cluster domain that you use for the {{site.data.keyword.openshiftshort}} web console and other routes. Before you continue, wait until the cluster is ready by checking that the Ingress Subdomain follows a pattern of `<cluster_name>.<globally_unique_account_HASH>-0001.<region>.containers.appdomain.cloud`.

    ```sh
    ibmcloud oc cluster get --cluster myvpc-cluster
    ```
    {: pre}
6. Add yourself as a user to the {{site.data.keyword.openshiftshort}} cluster by setting the cluster context.

    ```sh
    ibmcloud oc cluster config --cluster myvpc-cluster --admin
    ```
    {: pre}
7. In your browser, navigate to the address of your Master URL and append `/console`. For example, `https://c0.containers.cloud.ibm.com:23652/console`. If time permits, you can explore the different areas of the {{site.data.keyword.openshiftshort}} web console.

    | Area | Location in console | Description |
    |---|---|---|
    | Administrator perspective | Side navigation menu perspective switcher | From the Administrator perspective, you can manage and set up the components that your team needs to run your apps, such as projects for your workloads, networking, and operators for integrating IBM, Red Hat, 3rd-party, and custom services into the cluster. For more information, see Viewing cluster information in the {{site.data.keyword.openshiftshort}} documentation. |
    | Developer perspective | Side navigation menu perspective switcher | From the Developer perspective, you can add apps to your cluster in a variety of ways, such as from Git repositories, container images, drag-and-drop or uploaded YAML files, operator catalogs, and more. The Topology view presents a unique way to visualize the workloads that run in a project and navigate their components from sidebars that aggregate related resources, including pods, services, routes, and metadata. For more information, see Developer perspective in the {{site.data.keyword.openshiftshort}} documentation. |
    {: caption="{{site.data.keyword.openshiftshort}} console overview" caption-side="top"}
8. From the {{site.data.keyword.openshiftshort}} web console menu bar, click your profile IAM#user.name@email.com > Copy Login Command. Display and copy the `oc login` token command into your command line to authenticate from the CLI.

    Save your cluster master URL to access the {{site.data.keyword.openshiftshort}} console later. In future sessions, you can skip the `cluster config` step and copy the login command from the console instead.
9. Verify that the `oc` commands run properly with your cluster by checking the version.

    ```sh
    oc version
    ```
    {: pre}

    Example output

    ```
    Client Version: v4.8.0
    Kubernetes Version: v1.22.4.2
    ```
    {: screen}
If you can't perform operations that require Administrator permissions, such as listing all the worker nodes or pods in a cluster, download the TLS certificates and permission files for the cluster administrator by running the `ibmcloud oc cluster config --cluster myvpc-cluster --admin` command.
{: tip}
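As a side note on the subnet sizing used in this lesson: `--ipv4-address-count 256` corresponds to a `/24` prefix, and because a VPC subnet can't be resized later, it is worth double-checking the count up front. The following sketch uses Python's standard `ipaddress` module (the `10.240.0.0` range is an illustrative example, not a value from this tutorial):

```python
import ipaddress

def addresses_in_subnet(cidr: str) -> int:
    """Return the total number of IPv4 addresses in a subnet.

    Because a VPC subnet's address count can't be changed later,
    check it before running `ibmcloud is subnet-create`.
    """
    return ipaddress.ip_network(cidr).num_addresses

# A /24 prefix provides the 256 addresses used in this tutorial's subnet.
print(addresses_in_subnet("10.240.0.0/24"))  # 256
```

A smaller prefix such as `/28` would yield only 16 addresses, which can be too few once worker nodes and load balancer services each claim their own IP.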
## Lesson 2: Deploying an app in an {{site.data.keyword.openshiftshort}} project
{: #vpc_rh_app}
{: step}

Quickly deploy a new sample app that is available to requests from inside the cluster only.
{: shortdesc}
The components that you deploy by completing this lesson are shown in the following diagram.
1. Create an {{site.data.keyword.openshiftshort}} project for your Hello World app.

    ```sh
    oc new-project hello-world
    ```
    {: pre}
2. Build the sample app from the source code{: external}. With the {{site.data.keyword.openshiftshort}} `new-app` command, you can refer to a directory in a remote repository that contains the Dockerfile and app code to build your image. The command builds the image, stores the image in the local Docker registry, and creates the app deployment configurations (`dc`) and services (`svc`). For more information about creating new apps, see the {{site.data.keyword.openshiftshort}} docs{: external}.

    ```sh
    oc new-app --name hello-world https://github.com/IBM/container-service-getting-started-wt --context-dir="Lab 1"
    ```
    {: pre}
3. Verify that the sample Hello World app components are created.
    1. List the hello-world services and note the service name. So far, your app listens for traffic on these internal cluster IP addresses only. In the next lesson, you create a load balancer for the service so that the load balancer can forward external traffic requests to the app.

        ```sh
        oc get svc -n hello-world
        ```
        {: pre}

        Example output

        ```
        NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
        hello-world   ClusterIP   172.21.xxx.xxx   <none>        8080/TCP   31m
        ```
        {: screen}
    2. List the pods. Pods with `build` in the name are jobs that Completed as part of the new app build process. Make sure that the hello-world pod status is Running.

        ```sh
        oc get pods -n hello-world
        ```
        {: pre}

        Example output

        ```
        NAME                   READY   STATUS      RESTARTS   AGE
        hello-world-1-9cv7d    1/1     Running     0          30m
        hello-world-1-build    0/1     Completed   0          31m
        hello-world-1-deploy   0/1     Completed   0          31m
        ```
        {: screen}
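The `CLUSTER-IP` shown in the service output above comes from a private address range, which is why the app is reachable only from inside the cluster at this point. A quick sketch with Python's standard `ipaddress` module illustrates the check (the specific address is a placeholder in the style of the example output, not a real cluster value):

```python
import ipaddress

# Cluster IPs like the 172.21.x.x address in the example output fall inside
# the private 172.16.0.0/12 block, so they are not routable from the internet.
cluster_ip = ipaddress.ip_address("172.21.0.10")  # illustrative value
print(cluster_ip.is_private)                                   # True
print(cluster_ip in ipaddress.ip_network("172.16.0.0/12"))     # True
```

This is exactly the gap that the VPC load balancer in the next lesson closes by giving the service an externally reachable hostname.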
## Lesson 3: Setting up a VPC load balancer to expose your app
{: #vpc_rh_vpc_lb}
{: step}

Set up a VPC load balancer to expose your app to external requests on the public network.
{: shortdesc}
When you create a Kubernetes LoadBalancer service in your cluster, a VPC load balancer is automatically created in your VPC outside of your cluster. The VPC load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. The following diagram illustrates how a user accesses an app's service through the VPC load balancer, even though your worker node is connected to only a private subnet.
Interested in using an {{site.data.keyword.openshiftshort}} route to expose your app instead? Check out How does a request via route get to my app in a VPC cluster? and Setting up public routes.
{: tip}
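For reference, the `oc expose` command that this lesson uses creates a Kubernetes `LoadBalancer` service roughly equivalent to applying a manifest like the following sketch. The field values mirror the tutorial's command flags; the `selector` label is an assumption based on the deployment name, not a value confirmed by the tutorial:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hw-lb-svc
  namespace: hello-world
spec:
  type: LoadBalancer     # triggers automatic creation of a VPC load balancer
  selector:
    app: hello-world     # assumed label on the hello-world pods
  ports:
    - port: 8080         # port the service listens on
      targetPort: 8080   # port the app container listens on
```

Either route ends with the same result: a service of type `LoadBalancer` that the VPC infrastructure backs with a multizonal load balancer and a public hostname.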
1. Create a Kubernetes `LoadBalancer` service in your cluster to publicly expose the hello world app.

    ```sh
    oc expose dc/hello-world --type=LoadBalancer --name=hw-lb-svc --port=8080 --target-port=8080 -n hello-world
    ```
    {: pre}

    Example output

    ```
    service "hw-lb-svc" exposed
    ```
    {: screen}

    | Parameter | Description |
    |---|---|
    | `expose` | Expose a Kubernetes resource, such as a deployment, as a service so that users can access the resource by using the VPC load balancer hostname. |
    | `dc/<hello-world-deployment>` | The resource type and the name of the resource to expose with this service. |
    | `--name=<hello-world-service>` | The name of the service. |
    | `--type=LoadBalancer` | The service type to create. In this lesson, you create a `LoadBalancer` service. |
    | `--port=<8080>` | The port on which the service listens for external network traffic. |
    | `--target-port=<8080>` | The port that your app listens on and to which the service directs incoming network traffic. In this example, the `target-port` is the same as the `port`, but other apps that you create might use a different port. |
    | `-n hello-world` | The namespace that your deployment is in. |
    {: caption="More about the expose parameters" caption-side="top"}
2. Verify that the Kubernetes `LoadBalancer` service is created successfully in your cluster. When you create the Kubernetes `LoadBalancer` service, a VPC load balancer is automatically created for you. The VPC load balancer assigns a hostname to your Kubernetes `LoadBalancer` service that you can see in the LoadBalancer Ingress field of your CLI output. In VPC, services in your cluster are assigned a hostname because the external IP address for the service is not stable.

    The VPC load balancer takes a few minutes to provision in your VPC. Until the VPC load balancer is ready, you can't access the Kubernetes `LoadBalancer` service through its hostname.

    ```sh
    oc describe service hw-lb-svc -n hello-world
    ```
    {: pre}

    Example CLI output:

    ```
    Name:                     hw-lb-svc
    Namespace:                default
    Labels:                   app=hello-world-deployment
    Annotations:              <none>
    Selector:                 app=hello-world-deployment
    Type:                     LoadBalancer
    IP:                       172.21.xxx.xxx
    LoadBalancer Ingress:     1234abcd-us-south.lb.appdomain.cloud
    Port:                     <unset>  8080/TCP
    TargetPort:               8080/TCP
    NodePort:                 <unset>  32040/TCP
    Endpoints:
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:
      Type    Reason                Age   From                Message
      ----    ------                ----  ----                -------
      Normal  EnsuringLoadBalancer  1m    service-controller  Ensuring load balancer
      Normal  EnsuredLoadBalancer   1m    service-controller  Ensured load balancer
    ```
    {: screen}
3. Verify that the VPC load balancer is created successfully in your VPC. In the output, verify that the VPC load balancer has a Provision Status of `active` and an Operating Status of `online`.

    The VPC load balancer is named in the format `kube-<cluster_ID>-<kubernetes_lb_service_UID>`. To see your cluster ID, run `ibmcloud oc cluster get --cluster <cluster_name>`. To see the Kubernetes `LoadBalancer` service UID, run `kubectl get svc hw-lb-svc -o yaml` and look for the metadata.uid field in the output.
    {: tip}

    ```sh
    ibmcloud is load-balancers
    ```
    {: pre}

    In the following example CLI output, the VPC load balancer that is named `kube-bsaucubd07dhl66e4tgg-1f4f408ce6d2485499bcbdec0fa2d306` is created for the Kubernetes `LoadBalancer` service:

    ```
    ID                                          Name                                                         Family        Subnets               Is public   Provision status   Operating status   Resource group
    r006-d044af9b-92bf-4047-8f77-a7b86efcb923   kube-bsaucubd07dhl66e4tgg-1f4f408ce6d2485499bcbdec0fa2d306   Application   mysubnet-us-south-3   true        active             online             default
    ```
    {: screen}
4. Send a request to your app by curling the hostname and port of the Kubernetes `LoadBalancer` service that is assigned by the VPC load balancer, which you found in step 2. Example:

    ```sh
    curl 1234abcd-us-south.lb.appdomain.cloud:8080
    ```
    {: pre}

    Example output

    ```
    Hello world from hello-world-deployment-5fd7787c79-sl9hn! Your app is up and running in a cluster!
    ```
    {: screen}
5. Optional: To clean up the resources that you created in this lesson, you can use the labels that are assigned to each app.
    1. List all the resources for each app in the `hello-world` project.

        ```sh
        oc get all -l app=hello-world -o name -n hello-world
        ```
        {: pre}

        Example output

        ```
        pod/hello-world-1-dh2ff
        replicationcontroller/hello-world-1
        service/hello-world
        deploymentconfig.apps.openshift.io/hello-world
        buildconfig.build.openshift.io/hello-world
        build.build.openshift.io/hello-world-1
        imagestream.image.openshift.io/hello-world
        imagestream.image.openshift.io/node
        ```
        {: screen}

    2. Delete all the resources that you created.

        ```sh
        oc delete all -l app=hello-world -n hello-world
        ```
        {: pre}
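As noted in this lesson, the VPC load balancer name encodes the cluster ID and the Kubernetes service UID in the format `kube-<cluster_ID>-<kubernetes_lb_service_UID>`. A small, hypothetical helper (not part of the IBM Cloud tooling) shows how the example name from the CLI output above splits into its parts:

```python
def parse_vpc_lb_name(name: str):
    """Split a VPC load balancer name of the form
    kube-<cluster_ID>-<service_UID> into its components.

    The cluster ID contains no dashes, and the service UID is the
    Kubernetes metadata.uid value with its dashes removed.
    """
    prefix, cluster_id, service_uid = name.split("-", 2)
    if prefix != "kube":
        raise ValueError(f"not a cluster-managed VPC load balancer: {name}")
    return cluster_id, service_uid

# Example name from this lesson's CLI output.
cluster_id, uid = parse_vpc_lb_name(
    "kube-bsaucubd07dhl66e4tgg-1f4f408ce6d2485499bcbdec0fa2d306"
)
print(cluster_id)  # bsaucubd07dhl66e4tgg
```

Matching the first component against your cluster ID is a quick way to confirm which of several VPC load balancers belongs to a given cluster.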
## What's next?
{: #vpc_rh_next}

Now that you have a VPC cluster, learn more about what you can do.
{: shortdesc}
- Backing up your internal image registry to {{site.data.keyword.cos_full_notm}}
- Overview of the differences between classic and VPC clusters
- VPC cluster limitations
- About the v2 API
Need help, have questions, or want to give feedback on VPC clusters? Try posting in the Slack channel{: external}.
{: tip}

