If you want to run a local Red Hat OpenShift cluster on your laptop, this guide is written just for you. It is not meant for production setups or any use where actual customer traffic is anticipated. CRC is a tool that deploys a minimal OpenShift Container Platform 4 cluster and a Podman container runtime on a local computer, which makes it fit for development and testing purposes only. Local OpenShift is mainly targeted at running on developers’ desktops. For production-grade OpenShift Container Platform deployments, refer to the official Red Hat documentation on using the full OpenShift installer.

We also have a guide on running Red Hat OpenShift Container Platform in KVM virtualization.

Here are the key points to note about the local Red Hat OpenShift Container Platform cluster created using CRC:

  • The cluster is ephemeral
  • Both the control plane and the worker run on a single node
  • The Cluster Monitoring Operator is disabled by default (it can be re-enabled; see the example after this list).
  • There is no supported upgrade path to newer OpenShift Container Platform versions
  • The cluster uses 2 DNS domain names, crc.testing and apps-crc.testing
  • crc.testing domain is for core OpenShift services and apps-crc.testing is for applications deployed on the cluster.
  • The cluster uses the 172 address range for internal cluster communication.
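
The Cluster Monitoring Operator mentioned above can be turned back on before the cluster is started, using the crc config command covered later in this guide. Note that monitoring requires considerably more memory than the 9 GB minimum.

# Re-enable the Cluster Monitoring Operator for the next crc start
crc config set enable-cluster-monitoring true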

Requirements for running Local OpenShift Container Platform:

  • A computer with an AMD64 or Intel 64 processor
  • Physical CPU cores: 4
  • Free memory: 9 GB
  • Disk space: 35 GB

1. Local Computer Preparation

We shall be performing this installation on a Red Hat Enterprise Linux 9 system.

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 9.10 (Plow)

The OS specifications are as shared below:

[jkmutai@crc ~]$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       238Mi        30Gi       8.0Mi       282Mi        30Gi
Swap:            9Gi          0B         9Gi

[jkmutai@crc ~]$ grep -c ^processor /proc/cpuinfo
8

[jkmutai@crc ~]$ ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether b2:42:4e:64:fb:17 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 192.168.207.2/24 brd 192.168.207.255 scope global noprefixroute ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::b042:4eff:fe64:fb17/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

For RHEL, register the system

If you’re performing this setup on a RHEL system, use the commands below to register the system.

$ sudo subscription-manager register --auto-attach
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: <RH-USERNAME>
Password: <RH-PASSWORD>
The registered system name is: crc.example.com
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64
Status:       Subscribed

The command automatically attaches any available subscription matching the system. You can also provide the username and password on one command line:

sudo subscription-manager register --username <username> --password <password> --auto-attach

If you would like to register the system without immediately attaching a subscription, then run:

sudo subscription-manager register

Once the system is registered, attach a subscription from a specific pool using the following command:

sudo subscription-manager attach --pool=<POOL_ID>

To find which pools are available to the system, run the commands:

sudo subscription-manager list --available
sudo subscription-manager list --available --all
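
If you only need the pool identifiers, a simple filter such as the one below (a convenience sketch, not part of subscription-manager itself) prints them one per line:

# Print only the Pool ID values from the available subscriptions
sudo subscription-manager list --available | awk '/Pool ID:/ {print $3}'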

Update your system and reboot

sudo dnf -y update
sudo reboot

Install required dependencies

You need to install the libvirt and NetworkManager packages, which are the dependencies for running a local OpenShift cluster.

### Fedora / RHEL 8+ ###
sudo dnf -y install wget vim NetworkManager

### RHEL 7 / CentOS 7 ###
sudo yum -y install wget vim NetworkManager

### Debian / Ubuntu ###
sudo apt update
sudo apt install wget vim libvirt-daemon-system qemu-kvm libvirt-daemon network-manager

2. Download Red Hat OpenShift Local

Next we download the CRC portable executable. Visit the Red Hat OpenShift downloads page to pull the local cluster installer.

Under Cluster, select “Local” as the option to create your cluster. You’ll see a download link for the installer as well as a link to download your pull secret.

Here is the direct download link, provided for reference purposes:

wget https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz

Extract the downloaded archive:

tar xvf crc-linux-amd64.tar.xz

Move the binary to a location in your PATH:

sudo mv crc-linux-*-amd64/crc /usr/local/bin
sudo rm -rf crc-linux-*-amd64/

Confirm the installation was successful by checking the software version:

$ crc version
CRC version: 2.38.0+25b6eb
OpenShift version: 4.15.17

Data collection can be enabled or disabled with the following commands:

#Enable
crc config set consent-telemetry yes

#Disable
crc config set consent-telemetry no
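
You can confirm the current setting at any time:

crc config get consent-telemetry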

3. Run Local OpenShift Cluster on a Linux Computer

Create a standard user if root is the only account that you have:

useradd -m crc -s /bin/bash
passwd crc
# Add the user to the admin group: wheel on RHEL/Fedora, sudo on Debian/Ubuntu
usermod -aG wheel crc || usermod -aG sudo crc
echo "crc ALL=(ALL) NOPASSWD:ALL" | tee /etc/sudoers.d/crc

Log in as the crc user:

su - crc

Run the crc setup command to prepare your machine for a new Red Hat OpenShift Local cluster. All the prerequisites for using CRC are handled automatically for you.

$ crc setup
CRC is constantly improving and we would like to know more about usage (more details at https://developers.redhat.com/article/tool-data-collection)
Your preference can be changed manually if desired using 'crc config set consent-telemetry <yes/no>'
Would you like to contribute anonymous usage statistics? [y/N]: y
Thanks for helping us! You can disable telemetry with the command 'crc config set consent-telemetry no'.
INFO Using bundle path /home/crc/.crc/cache/crc_libvirt_4.15.17_amd64.crcbundle
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Caching crc-admin-helper executable
INFO Using root access: Changing ownership of /home/jkmutai/.crc/bin/crc-admin-helper-linux
INFO Using root access: Setting suid for /home/jkmutai/.crc/bin/crc-admin-helper-linux
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Creating symlink for crc executable
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Installing libvirt service and dependencies
INFO Using root access: Installing virtualization packages
INFO Checking if user is part of libvirt group
INFO Adding user to libvirt group
INFO Using root access: Adding user to the libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
WARN No active (running) libvirtd systemd unit could be found - make sure one of libvirt systemd units is enabled so that it's autostarted at boot time.
INFO Starting libvirt service
INFO Using root access: Executing systemctl daemon-reload command
INFO Using root access: Executing systemctl start libvirtd
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Installing crc-driver-libvirt
INFO Checking crc daemon systemd service
INFO Setting up crc daemon systemd service
INFO Checking crc daemon systemd socket units
INFO Setting up crc daemon systemd socket units
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Writing Network Manager config for crc
INFO Using root access: Writing NetworkManager configuration to /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
INFO Using root access: Changing permissions for /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf to 644
INFO Using root access: Executing systemctl daemon-reload command
INFO Using root access: Executing systemctl reload NetworkManager
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Writing dnsmasq config for crc
INFO Using root access: Writing NetworkManager configuration to /etc/NetworkManager/dnsmasq.d/crc.conf
INFO Using root access: Changing permissions for /etc/NetworkManager/dnsmasq.d/crc.conf to 644
INFO Using root access: Executing systemctl daemon-reload command
INFO Using root access: Executing systemctl reload NetworkManager
INFO Checking if libvirt 'crc' network is available
INFO Setting up libvirt 'crc' network
INFO Checking if libvirt 'crc' network is active
INFO Starting libvirt 'crc' network
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
INFO Checking if /home/jkmutai/.crc/cache/crc_libvirt_4.15.17_amd64.crcbundle exists
INFO Getting bundle for the CRC executable
INFO Downloading crc_libvirt_4.15.17_amd64.crcbundle

The CRC bundle is downloaded locally within a few seconds or minutes, depending on your network speed.

INFO Downloading crc_libvirt_4.15.17_amd64.crcbundle
1.00 GiB / 4.00 GiB [----------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00% 85.19 MiB p/s
INFO Uncompressing /home/jkmutai/.crc/cache/crc_libvirt_4.15.17_amd64.crcbundle
crc.qcow2: 12.48 GiB / 12.48 GiB [-----------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00%
oc: 118.13 MiB / 118.13 MiB [----------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00%

Once the system is correctly set up for using CRC, start the new Red Hat OpenShift Local instance:

$ crc start
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking crc daemon systemd socket units
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Loading bundle: crc_libvirt_4.15.17_amd64...
CRC requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://console.redhat.com/openshift/create/local.

Paste the contents of the Pull secret.

? Please enter the pull secret <PASTE-PULL-SECRET-FROM-REDHAT-PORTAL>

This can be obtained from the Red Hat OpenShift Portal.

The local OpenShift cluster creation process should then continue.

INFO Creating CRC VM for openshift 4.15.17...
INFO Generating new SSH key pair...
INFO Generating new password for the kubeadmin user
INFO Starting CRC VM for openshift 4.15.17...
INFO CRC instance is running with IP 192.168.130.11
INFO CRC VM is running
INFO Updating authorized keys...
INFO Configuring shared directories
INFO Check internal and public DNS query...
INFO Check DNS query from host...
INFO Verifying validity of the kubelet certificates...
INFO Starting kubelet service
INFO Waiting for kube-apiserver availability... [takes around 2min]
INFO Adding user's pull secret to the cluster...
INFO Updating SSH key to machine config resource...
INFO Waiting for user's pull secret part of instance disk...
INFO Changing the password for the kubeadmin user
INFO Updating cluster ID...
INFO Updating root CA cert to admin-kubeconfig-client-ca configmap...
INFO Starting openshift instance... [waiting for the cluster to stabilize]
INFO 3 operators are progressing: image-registry, network, openshift-controller-manager
INFO 2 operators are progressing: image-registry, openshift-controller-manager
INFO Operator openshift-controller-manager is progressing
INFO Operator authentication is not yet available
INFO Operator kube-apiserver is progressing
INFO All operators are available. Ensuring stability...
INFO Operators are stable (2/3)...
INFO Operators are stable (3/3)...
INFO Adding crc-admin and crc-developer contexts to kubeconfig...

If creation was successful, you should get output like the following in your console.

Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: yHhxX-fqAjW-8Zzw5-Eg2jg

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443

The virtual machine created can be checked with the virsh command:

$ sudo virsh list
 Id   Name   State
----------------------
 1    crc    running

4. Manage cluster using crc commands

Update the number of vCPUs available to the instance:

crc config set cpus <number>

Configure the memory available to the instance:

$ crc config set memory <number-in-mib>
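
For example, to allocate 6 vCPUs and 16 GiB of memory to the instance (illustrative values; they must meet the minimum requirements listed earlier):

crc config set cpus 6
crc config set memory 16384

Configuration changes take effect the next time the instance is started with crc start.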

Display the status of the OpenShift cluster:

### When running ###
$ crc status
CRC VM:          Running
OpenShift:       Running (v4.15.17)
Podman:
Disk Usage:      15.29GB of 32.74GB (Inside the CRC VM)
Cache Usage:     17.09GB
Cache Directory: /home/jkmutai/.crc/cache

### When stopped ###
$ crc status
CRC VM:          Stopped
OpenShift:       Stopped (v4.15.17)
Podman:
Disk Usage:      0B of 0B (Inside the CRC VM)
Cache Usage:     17.09GB
Cache Directory: /home/jkmutai/.crc/cache

Get the IP address of the running OpenShift cluster:

$ crc ip
192.168.130.11

Open the OpenShift web console in the default browser:

crc console
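
To only print the web console URL without opening a browser, use:

crc console --url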

Accept the SSL certificate warnings to access the OpenShift dashboard.

Accept the risk and continue.

Authenticate with the username and password given on screen after deployment of the crc instance.

The following command can also be used to view the passwords for the developer and kubeadmin users:

crc console --credentials

To stop the instance, run:

crc stop

If you want to permanently delete the instance, use:

crc delete
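
To also revert the host configuration changes made by crc setup (such as the libvirt 'crc' network and dnsmasq configuration), you can additionally run:

crc cleanup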

5. Configure oc environment

Let’s add the oc executable to our system’s PATH:

$ crc oc-env
export PATH="/home/jkmutai/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)

$ vim ~/.bashrc
# Add the following lines at the end of the file
export PATH="/home/$USER/.crc/bin/oc:$PATH"
eval $(crc oc-env)

Log out and back in to validate that it works.

$ exit

Check the oc binary path after logging back in to the system.

$ which oc
~/.crc/bin/oc/oc

$ oc get nodes
NAME                 STATUS   ROLES           AGE   VERSION
crc-9jm8r-master-0   Ready    master,worker   21d   v1.24.0+9546431

Confirm this works by checking the installed cluster version:

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.15.17    True        False         20d     Cluster version is 4.15.17

To log in as the developer user:

crc console --credentials
oc login -u developer https://api.crc.testing:6443
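
As a quick smoke test of developer access, you can create a project and deploy a sample application. The names below are illustrative, and the httpd image stream is provided by the openshift-samples operator:

# Create a test project, deploy the httpd sample image stream and expose it
oc new-project demo
oc new-app --name=hello httpd
oc expose service/hello
oc get route hello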

To log in as the kubeadmin user, run the following commands:

$ oc config use-context crc-admin
$ oc whoami
kubeadmin

To log in to the registry as that user with its token, run:

oc registry login --insecure=true
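
To see the registry hostname that the login targets, you can print it with:

oc registry info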

List the available Cluster Operators:

$ oc get co
NAME                                       VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.15.17    True        False         False      11m
config-operator                            4.15.17    True        False         False      21d
console                                    4.15.17    True        False         False      13m
dns                                        4.15.17    True        False         False      19m
etcd                                       4.15.17    True        False         False      21d
image-registry                             4.15.17    True        False         False      14m
ingress                                    4.15.17    True        False         False      21d
kube-apiserver                             4.15.17    True        False         False      21d
kube-controller-manager                    4.15.17    True        False         False      21d
kube-scheduler                             4.15.17    True        False         False      21d
machine-api                                4.15.17    True        False         False      21d
machine-approver                           4.15.17    True        False         False      21d
machine-config                             4.15.17    True        False         False      21d
marketplace                                4.15.17    True        False         False      21d
network                                    4.15.17    True        False         False      21d
node-tuning                                4.15.17    True        False         False      13m
openshift-apiserver                        4.15.17    True        False         False      11m
openshift-controller-manager               4.15.17    True        False         False      14m
openshift-samples                          4.15.17    True        False         False      21d
operator-lifecycle-manager                 4.15.17    True        False         False      21d
operator-lifecycle-manager-catalog         4.15.17    True        False         False      21d
operator-lifecycle-manager-packageserver   4.15.17    True        False         False      19m
service-ca                                 4.15.17    True        False         False      21d

Display information about the release:

oc adm release info

Note that OpenShift Local reserves IP subnets for its internal use, and these should not collide with your host network. The reserved subnets are:

  • 10.217.0.0/22
  • 10.217.4.0/23
  • 192.168.126.0/24
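
A rough way to check the host for overlapping routes is shown below; an empty result means no obvious conflict, though more complex routing setups may need manual review:

# Look for host routes that fall within the CRC-reserved ranges
ip route | grep -E '10\.217\.|192\.168\.126\.'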

If your local system is behind a proxy, define the proxy settings with the crc config command. See the examples below:

crc config set http-proxy http://proxy.example.com:<port>
crc config set https-proxy http://proxy.example.com:<port>
crc config set no-proxy <comma-separated-no-proxy-entries>

If the proxy server uses SSL, set the CA certificate as shown below:

crc config set proxy-ca-file <path-to-custom-ca-file>
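
To review the proxy values (and any other properties) currently set, run:

crc config view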

6. Connecting to a remote instance

If the deployment is on a remote server, install CRC and start the instance using the process in steps 1-3. With the cluster up and running, install the HAProxy package together with the semanage tool:

sudo dnf install haproxy /usr/sbin/semanage

Allow access to the cluster through the firewall:

sudo firewall-cmd --add-service={http,https,kube-apiserver} --permanent
sudo firewall-cmd --reload

If you have SELinux in enforcing mode, allow HAProxy to listen on TCP port 6443 so it can serve kube-apiserver traffic on this port:

sudo semanage port -a -t http_port_t -p tcp 6443

Back up the current HAProxy configuration file:

sudo cp /etc/haproxy/haproxy.cfg{,.bak}

Save the current IP address of the CRC instance in a variable:

export CRC_IP=$(crc ip)

Create a new configuration:

sudo tee /etc/haproxy/haproxy.cfg<<EOF
global
    log /dev/log local0

defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s

listen apps
    bind 0.0.0.0:80
    server crc_instance $CRC_IP:80 check

listen apps_ssl
    bind 0.0.0.0:443
    server crc_instance $CRC_IP:443 check

listen api
    bind 0.0.0.0:6443
    server crc_instance $CRC_IP:6443 check
EOF
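
Before starting the service, you can optionally validate the generated configuration:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg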

Start and enable HAProxy service:

sudo systemctl enable --now haproxy

Confirm service status:

$ systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
     Active: active (running) since Fri 2024-07-15 02:39:50 EAT; 5s ago
    Process: 4679 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 4681 (haproxy)
      Tasks: 9 (limit: 203397)
     Memory: 71.0M
        CPU: 100ms
     CGroup: /system.slice/haproxy.service
             ├─4681 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
             └─4683 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

Sep 02 02:39:50 crc.mylab.io systemd[1]: Starting HAProxy Load Balancer...
Sep 02 02:39:50 crc.mylab.io haproxy[4681]: [NOTICE]   (4681) : New worker #1 (4683) forked
Sep 02 02:39:50 crc.mylab.io systemd[1]: Started HAProxy Load Balancer.

Check listening ports:

$ ss -tunelp | egrep '80|443|6443'
tcp   LISTEN 0      100           0.0.0.0:6443       0.0.0.0:*    ino:46027 sk:e cgroup:/system.slice/haproxy.service <->
tcp   LISTEN 0      100           0.0.0.0:80         0.0.0.0:*    ino:46025 sk:11 cgroup:/system.slice/haproxy.service <->
tcp   LISTEN 0      100           0.0.0.0:443        0.0.0.0:*    ino:46026 sk:17 cgroup:/system.slice/haproxy.service <->

Connect from a client system (RHEL example)

The prerequisites for this are:

  • A remote server running a local OpenShift cluster for the client to connect to
  • The external IP address of the remote server
  • The latest OpenShift CLI (oc) installed in your $PATH on the client

You can use dnsmasq to connect a client machine to a remote server where the OpenShift Container Platform cluster is running. This process assumes you’re using a RHEL-based system as the client.

Install dnsmasq package:

sudo dnf install dnsmasq

Configure NetworkManager to use dnsmasq for DNS resolution:

sudo tee /etc/NetworkManager/conf.d/use-dnsmasq.conf<<EOF
[main]
dns=dnsmasq
EOF

Add the remote OpenShift Local cluster DNS entries to the dnsmasq configuration:

$ sudo vim /etc/NetworkManager/dnsmasq.d/external-crc.conf
address=/apps-crc.testing/REMOTE_SERVER_IP_ADDRESS
address=/api.crc.testing/REMOTE_SERVER_IP_ADDRESS

If at some point you ran a local OpenShift cluster on this machine, comment out any existing entries in /etc/NetworkManager/dnsmasq.d/crc.conf, as they will conflict with the entries for the remote cluster.

Reload NetworkManager after making the changes:

sudo systemctl reload NetworkManager
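
Verify from the client that the names now resolve to the remote server's external IP (dig is provided by the bind-utils package):

dig +short api.crc.testing
dig +short console-openshift-console.apps-crc.testing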

We can then test by logging in to the remote cluster as the developer user with oc:

oc login -u developer -p developer https://api.crc.testing:6443

Now access the remote OpenShift Container Platform web console at https://console-openshift-console.apps-crc.testing. See the other guides that we’ve written on OpenShift.
