OpenShift Shortcuts

By Kashif Islam and Syed Hassan
  • OCP - General
    • Deleting All resources of a Type
    • Get Events Sorted By Time
    • Do something or delete a bunch of resources/pods
    • OpenShift Node information summary
    • Install Plan View and Approve
    • OC commands using REST API
    • Finding Node's Disk IDs
  • OCP - Networking
    • Network Interfaces in nodes
  • OCP - Catalog, CRD, & Operators
    • Disable default CatalogSources
    • Viewing source/images for all CatalogSources
    • Available operators, default channel, and catalog source
    • Merge CatalogSources
    • CRD and Who provides it
    • CSV installed and their status
  • GitOps ZTP - Argo
    • ArgoCD Plugin
    • ArgoCD Route
    • ArgoCD Secret
    • ArgoCD Apps
  • GitOps ZTP - Policies
    • PGT and Policies : Find out Wave
    • PGT and Policies : Status of all Policies
  • GitOps ZTP - CGU
    • Get Status of all CGUs
  • GitOps ZTP - Post Install
    • Extract Kubeconfig
  • GitOps ZTP - Debugging
    • Check AgentClusterInstall Status
    • OC commands within the cluster
    • Cluster Readiness Tests
  • Interacting with Image Registry
    • Listing contents of registry
    • Finding tag of a registry image
    • Finding details about a registry image
    • Listing SHA digests of all occurrences of an image
    • Listing all operators in registry
    • Listing all OCP versions in registry
    • Finding Available Channels for an operator
    • Extracting Registry Certificate
    • Copying a container image to registry
  • Useful Shell Snippets
    • Comparing Files Across Directories

    OCP - General

    Deleting All resources of a Type

    
    export namespace="asdf"
    oc delete csr -n $namespace $(oc get csr --no-headers | awk '{ print $1 }')
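
    As a side note on the pattern above: without "--no-headers", the first token awk extracts is the literal column header NAME, which would then be passed to "oc delete". A minimal sketch against mock "oc get" output (hypothetical CSR names; no cluster needed):

```shell
# Mock `oc get csr` output, including the header row (hypothetical names).
mock_output='NAME        AGE   SIGNERNAME
csr-abc12   10m   kubernetes.io/kube-apiserver-client-kubelet
csr-def34   12m   kubernetes.io/kube-apiserver-client-kubelet'

# Plain awk also extracts the literal header word "NAME":
printf '%s\n' "$mock_output" | awk '{print $1}'

# Skipping the first row (what --no-headers achieves) leaves only real names:
names=$(printf '%s\n' "$mock_output" | awk 'NR>1 {print $1}')
echo "$names"
```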
    

    Get Events Sorted By Time

    
    oc get events --sort-by='.lastTimestamp'
    

    Do something or delete a bunch of resources/pods

    Pipe the resource names through xargs to run any command against each of them; for example, replace the final "oc get pods" with "oc delete pods" to delete them all:

    
    oc get pods -n openshift-machine-api --no-headers | awk '{print $1}' | xargs oc get pods -n openshift-machine-api
    

    OpenShift Node information summary

    Summarize key information about the cluster's nodes. This also demonstrates customized output columns (the ROLE column is a placeholder; substitute the JSONPath expression you need):

    
    oc get nodes -o=custom-columns='NAME:.metadata.name,ROLE:TOBEDEFINED,CPU:.status.capacity.cpu,MEM:.status.capacity.memory,IP:.status.addresses[?(@.type=="InternalIP")].address'
    

    Output will look something like:

    
    NAME                     ROLE     CPU   MEM           IP
    ibu5-sno.npss.bos2.lab      64    131575724Ki   172.31.61.40
    

    Install Plan View and Approve

    To view all InstallPlans that are pending approval:

    
    oc get ip -n my-project -o=jsonpath='{.items[?(@.spec.approved==false)].metadata.name}'
    

    To approve all InstallPlans that are pending approval:

    
    oc patch installplan $(oc get ip -n my-project -o=jsonpath='{.items[?(@.spec.approved==false)].metadata.name}') -n my-project --type merge --patch '{"spec":{"approved":true}}'
    

    View cluster capabilities:

    
    oc get clusterversion version -o jsonpath='{.status.capabilities}' | jq
    

    OC commands using REST API

    If you want to make API calls to an OCP cluster you can use the following snippet to set up the environment:

    
    oc create serviceaccount robot
    oc adm policy add-cluster-role-to-user cluster-admin -z robot
    TOKEN=$(oc create token robot)
    

    After this, you can use "curl" to make API calls. For example:

    curl -X GET -H "Authorization: Bearer $TOKEN" $(oc whoami --show-server)/api/v1/namespaces

    To find the API endpoint behind any "oc" command, run the command with "--v=6" and look for the GET request in the output (note that the verbose log goes to stderr). For example:

    oc get policies --v=6 2>&1 | grep GET

    Some examples of REST API calls:

    
    curl -X GET -H "Authorization: Bearer $TOKEN" $(oc whoami --show-server)/apis/cluster.open-cluster-management.io/v1/managedclusters/cwl-site3/finalizers/
    curl -X GET -H "Authorization: Bearer $TOKEN" $(oc whoami --show-server)/api/v1/namespaces/keycloak/pods
    

    Red Hat's documentation includes a guide to the ACM REST API as well as a more generic OpenShift REST API reference.
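
    The JSON these calls return can be post-processed with jq, which the rest of this document already assumes is available. A minimal sketch against a mock /api/v1/namespaces response (the real one comes from the curl call above):

```shell
# Mock NamespaceList response, as /api/v1/namespaces would return it.
response='{"kind":"NamespaceList","items":[
  {"metadata":{"name":"default"}},
  {"metadata":{"name":"openshift-gitops"}}]}'

# Extract just the namespace names:
printf '%s' "$response" | jq -r '.items[].metadata.name'
```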

    Finding Node's Disk IDs

    To find disk IDs (by-path and by-id) across all nodes, you can use the following snippet:

    
    for NODE in $(oc get nodes -ojsonpath='{.items[*].metadata.name}'); do oc debug node/$NODE -- chroot /host /bin/bash -c 'echo "##### list of disk ######"; lsblk; echo "######## Disk IDs ########"; ls -al /dev/disk/by-path | grep sd[abcdef]$ ; ls -al /dev/disk/by-id | grep sd[abcdef]$ ; '; done
    

    OCP - Networking

    Network Interfaces in nodes

    The following snippet displays all network interfaces on the cluster's nodes. (Note: requires the NMState operator to be installed.)

    
    oc get nns -o jsonpath="{range .items[*]}{'\n===========\n'}{@.metadata.name}{'\n===========\n'}{range @.status.currentState.interfaces[*]}{.state}{'\t'}{.type}{'\t'}{.name}{'\t'}{.ip}{'\n'}{end}"
    

    OCP - Catalog, CRD, & Operators

    Disable default CatalogSources

    To disable the default CatalogSources:

    
    oc patch operatorhubs.config.openshift.io cluster --patch '{"spec":{"disableAllDefaultSources": true}}' --type=merge
    

    Viewing source/images for all CatalogSources

    
    oc get catalogsources.operators.coreos.com -A -o=custom-columns='NAME:.metadata.name,IMAGE:.spec.image'
    

    Output may look like the following:

    
    oc get catalogsources.operators.coreos.com -A -o=custom-columns='NAME:.metadata.name,IMAGE:.spec.image'
    NAME                            IMAGE
    cs-certified-operator-index     registry.bastion.npss.bos2.lab:9500/ocp/ocp4/redhat/certified-operator-index:v4.17
    cs-certified-operator-index-1   registry.bastion.npss.bos2.lab:9500/ocp/ocp4/redhat/certified-operator-index:v4.16
    cs-certified-operator-index-2   registry.bastion.npss.bos2.lab:9500/ocp/ocp4/redhat/certified-operator-index:v4.18
    cs-redhat-operator-index        registry.bastion.npss.bos2.lab:9500/ocp/ocp4/redhat/redhat-operator-index:v4.17
    cs-redhat-operator-index-1      registry.bastion.npss.bos2.lab:9500/ocp/ocp4/redhat/redhat-operator-index:v4.16
    cs-redhat-operator-index-2      registry.bastion.npss.bos2.lab:9500/ocp/ocp4/redhat/redhat-operator-index:v4.18
    

    Available operators, default channel, and catalog source

    
    oc get packagemanifests.packages.operators.coreos.com -o=custom-columns='NAME:.metadata.name,DEFAULT CHANNEL:.status.defaultChannel,CATALOG:.status.catalogSource,CSV:.status.channels[*].currentCSV' | sort
    

    Merge CatalogSources

    
    for f in catalogSource-cs-* ; do echo "---" >> catalog-source.yaml; cat $f >> catalog-source.yaml; done
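
    What the loop does: each catalogSource-cs-* file becomes one document in a multi-document catalog-source.yaml, separated by "---". A sketch against throwaway files (hypothetical contents, run in a temp directory):

```shell
# Run the merge loop against two throwaway files in a temp directory.
workdir=$(mktemp -d)
cd "$workdir"
printf 'name: cs-one\n' > catalogSource-cs-one.yaml
printf 'name: cs-two\n' > catalogSource-cs-two.yaml

for f in catalogSource-cs-* ; do
  echo "---" >> catalog-source.yaml   # YAML document separator
  cat "$f" >> catalog-source.yaml
done

cat catalog-source.yaml
```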
    

    CRD and Who provides it

    To find out names for a CRD and who provides it:

    
    oc get crd  -o custom-columns='NAME:.status.acceptedNames.kind','API:.spec.group','VER:.spec.versions[*].name'
    

    Output may look like:

    
    NAME                                API                                               VER
    AAQ                                 aaq.kubevirt.io                                   v1alpha1
    AcceleratorProfile                  dashboard.opendatahub.io                          v1,v1alpha
    Account                             nim.opendatahub.io                                v1
    ...
    

    To run this command against a specific CRD:

    
    oc get crd imagebasedupgrades.lca.openshift.io  -o custom-columns='NAME:.status.acceptedNames.kind','API:.spec.group','VER:.spec.versions[*].name'
    NAME                API                VER
    ImageBasedUpgrade   lca.openshift.io   v1
    

    CSV installed and their status

    Find out which CSVs are installed:

    
    oc get csv -A  -o custom-columns="_NAME_:metadata.name,_STATUS_:status.phase" | sort | uniq
    

    Example output will be:

    
    _NAME_                                        _STATUS_
    oadp-operator.v1.4.3                          Succeeded
    lifecycle-agent.v4.16.2                       Succeeded
    cluster-logging.v6.1.3                        Succeeded
    packageserver                                 Succeeded
    sriov-network-operator.v4.16.0-202502262005   Succeeded
    lvms-operator.v4.16.8                         Succeeded
    

    A simpler version of this could be:

    
    oc get csv -A -o name | sort | uniq | cut -f2 -d/
    

    GitOps ZTP - Argo

    ArgoCD Plugin

    To find the ArgoCD plug-in installed (openshift-gitops is the default namespace; replace it with the correct namespace if your cluster runs multiple ArgoCD instances):

    
    oc -n openshift-gitops get argocd openshift-gitops -o jsonpath='{.spec.repo.initContainers[0].image}{"\n"}'
    

    ArgoCD Route

    Define argo's namespace (openshift-gitops is the default) using the variable "argons", then:

    
    oc get route -n $argons -o jsonpath='{.items[].spec.host}'
    

    ArgoCD Secret

    Define argo's namespace (openshift-gitops is the default) using the variable "argons", then:

    
    oc get secrets -n $argons $argons-cluster -o jsonpath='{.data.admin\.password}' | base64 -d
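
    Two details worth noting in the command above: the dot in the key name is escaped ("admin\.password") so jsonpath reads it as one key rather than a path step, and Secret data is base64-encoded, hence the trailing "base64 -d". The decode step in isolation (mock password value; no cluster needed):

```shell
# Secret values come back base64-encoded; decode with base64 -d.
encoded=$(printf 'hunter2' | base64)    # stand-in for .data.admin\.password
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```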
    

    ArgoCD Apps

    Find all the Argo apps in the cluster and the Git repository each one sources from:

    
    oc get applications.argoproj.io -n openshift-gitops   -o json | jq '.items[] | .metadata.name, .spec.source'
    

    GitOps ZTP - Policies

    PGT and Policies : Find out Wave

    
    oc get policy -A  -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:metadata.name,-1000_WAVE:metadata.annotations.ran\.openshift\.io\/ztp-deploy-wave,STAGE:spec.remediationAction,STATUS:status.compliant' | sort -k 3 -n
    
    
    

    PGT and Policies : Status of all Policies

    First define the variable "cluster", then:

    
    oc get policies -n $cluster -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.details[].history[0].message}{"\n======================\n"}{end}'
    

    Sample Output:

    
    ztp-core-policy.common-cwl-config	Compliant; notification - objectbucketclaims [loki-bucket-odf] found as specified in namespace openshift-logging; notification - clusterloggings [instance] found as specified in namespace openshift-logging; notification - clusterlogforwarders [instance] found as specified in namespace openshift-logging; notification - configmaps [user-workload-monitoring-config] found as specified in namespace openshift-user-workload-monitoring; notification - deployments [prometheus-kafka-adapter] found as specified in namespace prom-kafka-adapter; notification - services [prometheus-kafka-adapter] found as specified in namespace prom-kafka-adapter; notification - configmaps [cluster-monitoring-config] found as specified in namespace openshift-monitoring; notification - persistentvolumeclaims [pvc-audit-log] found as specified in namespace openshift-keda; notification - kedacontrollers [keda] found as specified in namespace openshift-keda
    ======================
    ztp-core-policy.common-cwl-discon	Compliant; notification - catalogsources [redhat-operators-disconnected] found as specified in namespace openshift-marketplace; notification - operatorhubs [cluster] found as specified
    ======================
    ztp-core-policy.common-cwl-loki-config	Compliant; notification - secrets [logging-loki-odf] found as specified in namespace openshift-logging; notification - lokistacks [logging-loki] found as specified in namespace openshift-logging
    ======================
    

    GitOps ZTP - CGU

    Get Status of all CGUs

    
    oc get cgu -n ztp-install -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.status.currentBatchRemediationProgress}{"\n"}{end}'
    

    GitOps ZTP - Post Install

    Extract Kubeconfig

    To extract the kubeconfig for a managed cluster:

    
    export CLUSTER_NAME=sno2
    oc get secret -n $CLUSTER_NAME $CLUSTER_NAME-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > ./$CLUSTER_NAME-kubeconfig
    

    GitOps ZTP - Debugging

    Check AgentClusterInstall Status

    
    export CLUSTER_NAME=sno2
    for i in {1..1000}; do curl -s -k $(oc -n $CLUSTER_NAME get agentclusterinstalls.extensions.hive.openshift.io $CLUSTER_NAME \
    -o=jsonpath="{.status.debugInfo.eventsURL}") | jq "." | egrep "message|event_time" | egrep -v "^--"; echo "------------^^^^ " $i; echo; sleep 5; done
    

    A simpler output can be obtained using the following:

    
    export CLUSTER_NAME=sno2
    curl -k $(oc get -n $CLUSTER_NAME agentclusterinstall -o jsonpath='{.items[].status.debugInfo.eventsURL}') | jq '.' | grep message
    

    The output will look something like the following:

    
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 16714    0 16714    0     0  1632k      0 --:--:-- --:--:-- --:--:-- 1632k
        "message": "Successfully registered cluster",
        "message": "Custom install config was applied to the cluster",
        "message": "Updated image information (Image type is \"minimal-iso\", SSH public key is set)",
        "message": "Updated image information (Image type is \"minimal-iso\", SSH public key is set)",
        "message": "Host aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0301: Successfully registered",
        "message": "Cluster validation 'all-hosts-are-ready-to-install' that used to succeed is now failing",
        "message": "Cluster validation 'sufficient-masters-count' is now fixed",
        "message": "Host ocp-sno2: updated status from discovering to insufficient (Host cannot be installed due to following failing validation(s): Host couldn't synchronize with any NTP server)",
        "message": "Host sno2.5g-deployment.lab: validation 'ntp-synced' is now fixed",
        "message": "Host sno2.5g-deployment.lab: updated status from insufficient to known (Host is ready to be installed)",
        "message": "Cluster validation 'all-hosts-are-ready-to-install' is now fixed",
        "message": "Updated status of the cluster to ready",
        "message": "Updated status of the cluster to preparing-for-installation",
        "message": "Cluster starting to prepare for installation",
        "message": "Host sno2.5g-deployment.lab: updated status from known to preparing-for-installation (Host finished successfully to prepare for installation)",
        "message": "Host sno2.5g-deployment.lab: New image status infra.5g-deployment.lab:8443/openshift/release-images:4.14.0-x86_64. result: success. time: 5.17 seconds; size: 494.54 Megabytes; download rate: 100.33 MBps",
        "message": "Host sno2.5g-deployment.lab: New image status quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5173b37dbe7bcd28e41dea9b390d8c63bc4eb4d7ac060ccacd1a5a21f4d1293. result: success. time: 4.48 seconds; size: 802.25 Megabytes; download rate: 187.61 MBps",
        "message": "Host sno2.5g-deployment.lab: New image status quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:474d6aba2e2084a95732de1853e5c87b31aa101afb554798ebf97a960bebc293. result: success. time: 2.26 seconds; size: 603.11 Megabytes; download rate: 279.74 MBps",
        "message": "Host sno2.5g-deployment.lab: New image status registry.redhat.io/multicluster-engine/assisted-installer-rhel8@sha256:90f4e77350f1af8f24eabdfecd6c78cffdfbe6daba925c22593737242a697767. result: success. time: 3.65 seconds; size: 407.14 Megabytes; download rate: 116.82 MBps",
        "message": "Host sno2.5g-deployment.lab: updated status from preparing-for-installation to preparing-successful (Host finished successfully to prepare for installation)",
    
    

    OC commands within the cluster

    Sometimes you may want to run oc commands while logged into CoreOS directly. The command needs a kubeconfig, and the copy saved locally on the node can be used as shown here. (The example uses "oc get pods", but any oc command can be used instead.)

    
    oc get pods --kubeconfig=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig
    

    Cluster Readiness Tests

    The following snippet runs a set of checks to verify that the cluster is ready for use:

    
    error_count=0; RED='\033[31m'; GREEN='\033[32m'; RESET='\033[0m'; BLUE='\033[34m'; CYAN='\033[36m'
    echo -e "$RESET ########################################################"
    echo -e "$RESET ########################################################"
    echo -e "$CYAN ######################## Checking if all MCPs are ready #########################"
    for mcp in $(oc get mcp -o=name); do echo -e "$BLUE ### checking...."$mcp; echo -e "$RESET"; oc wait $mcp --for='jsonpath={.status.conditions[?(@.type=="Degraded")].status}=False' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $mcp "... Still degraded"; ((error_count++)); fi; oc wait $mcp --for='jsonpath={.status.conditions[?(@.type=="Updating")].status}=False' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $mcp"... Still Updating"; ((error_count++)); fi; oc wait $mcp  --for='jsonpath={.status.conditions[?(@.type=="Updated")].status}=True' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $mcp"... Not Updated"; ((error_count++)); fi; done; 
    
    echo -e "$RESET ####### Error Count So far: " $error_count
    
    echo -e "$CYAN ################## Checking if all Cluster Operators are ready ###################"
    for co in $(oc get co -o=name); do echo -e "$BLUE ### checking...."$co; echo -e "$RESET"; oc wait $co --for='jsonpath={.status.conditions[?(@.type=="Available")].status}=True' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $co "... Still Unavailable"; ((error_count++)); fi ; oc wait $co --for='jsonpath={.status.conditions[?(@.type=="Progressing")].status}=False' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $co "... Still Progressing"; ((error_count++)); fi; oc wait $co --for='jsonpath={.status.conditions[?(@.type=="Degraded")].status}=False' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $co "... Still Degraded"; ((error_count++)); fi; done
    
    echo -e "$RESET ####### Error Count So far: " $error_count
    
    echo -e "$CYAN ######################## Checking if all Nodes are ready #########################"
    for node in $(oc get nodes -o=name); do echo -e "$BLUE ### checking...."$node; echo -e "$RESET";  oc wait $node --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' --timeout=3s; if [ $? != 0 ]; then echo -e "$RED ### ERROR ###" $node "... Still Not Ready"; ((error_count++)); fi ; done
    echo -e "$RESET #########################################################"
    if [ $error_count != 0 ]; then echo -e "$RED ####### Total Error Count : " $error_count; else echo -e "$GREEN ####### Total Error Count : " $error_count; fi
    echo -e "$RESET #########################################################"
    

    Interacting with Image Registry

    Listing contents of registry

    Assuming that the registry is defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    

    The full contents of the registry can then be dumped using:

    
    podman search --limit 9999 --tls-verify=false $REGISTRY
    

    Finding tag of a registry image

    Assuming that the registry is defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    

    Let's define the keyword you want to search for:

    
    export IMAGE="multiclusterhub"
    

    Then tags of all matching entries can be found using:

    
    
    podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE | xargs -n1 podman search --limit 9999 --tls-verify=false --list-tags
    

    The output will look like the following:

    
    podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE | xargs -n1 podman search --limit 9999 --tls-verify=false --list-tags
    NAME                                                                TAG
    quay.tnc.bootcamp.lab:8443/multicluster-engine/mce-operator-bundle  57baa6e7
    

    If you would like the tags to be added to the name in the output, the following can be used:

    
    podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE | xargs -n1 podman search --limit 9999 --tls-verify=false --list-tags --format json | jq -r '.[] | "\(.Name):\(.Tags[])"'
    

    In this case, the output will look like:

    
    podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE | xargs -n1 podman search --limit 9999 --tls-verify=false --list-tags --format json | jq -r '.[] | "\(.Name):\(.Tags[])"'
    quay.tnc.bootcamp.lab:8443/multicluster-engine/mce-operator-bundle:57baa6e7
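
    The jq program '.[] | "\(.Name):\(.Tags[])"' pairs each image name with every tag in its Tags array. A sketch against mock --list-tags JSON (hypothetical repository and tags; no registry needed):

```shell
# Mock `podman search --list-tags --format json` output with two tags.
tags_json='[{"Name":"registry.example.com/mce-operator-bundle","Tags":["57baa6e7","v2.6"]}]'

# Emit one name:tag line per tag:
printf '%s' "$tags_json" | jq -r '.[] | "\(.Name):\(.Tags[])"'
```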
    

    Finding details about a registry image

    The command to extract the details of a specific copy of an image (hence it needs both name and tag) is:

    
    skopeo inspect --tls-verify=false docker://{ImageName}:{Tag}
    

    The tag-extraction command above can be combined with the skopeo command like this:

    Assuming that the registry and the image you seek are defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    export IMAGE="multiclusterhub"
    
    
    for ImageNameAndTag in $(podman search --limit 9999 --tls-verify=false --list-tags $(podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE) --format json | jq -r '.[] | "\(.Name):\(.Tags[])"'); do echo "======================"; skopeo inspect --tls-verify=false docker://$ImageNameAndTag ; done
    

    Listing SHA digests of all occurrences of an image

    Assuming that the registry and image are defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    export IMAGE="multicluster-operators-subscription-rhel9"
    
    
    for ImageNameAndTag in $(podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE | xargs -n1 podman search --limit 9999 --tls-verify=false --list-tags --format json | jq -r '.[] | "\(.Name):\(.Tags[])"'); do echo "======================"; skopeo inspect --tls-verify=false docker://$ImageNameAndTag | jq .Digest; done
    

    Example output:

    
    ======================
    "sha256:8151a89ebfebf2e6d336849fdd329b123b7f486938808ee80bf6bd8d44f6985c"
    ======================
    "sha256:b6de38d2af6781585cec6365d7a3aa74884ad597f750ab93d53eb31104bd2e76"
    ======================
    "sha256:c65aa18f94facf6623594c0a97ef99ee1c9c8ff16c6a972e06485a6c52572bb6"
    ======================
    "sha256:addd0597c6846fdd0d62ad1956b4d22298923f3ad174514a5b45df4bf302b2f0"
    

    Listing all operators in registry

    Assuming that the registry is defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    

    Though not perfect, the following command is usually a good way to list all the operators mirrored in a registry:

    
    podman search --limit 9999 --tls-verify=false $REGISTRY | grep operator-bundle
    

    Listing all OCP versions in registry

    Assuming that the registry is defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    

    The mirrored OCP versions can be viewed using:

    
    for ReleaseImages in $(podman search --limit 9999 --tls-verify=false $REGISTRY | grep release-images); do podman search --limit 9999 --list-tags --tls-verify=false $ReleaseImages; done
    

    Finding Available Channels for an operator

    To find the channels available for an operator, first define the REGISTRY variable, as well as the name of the operator in the IMAGE variable:

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443"
    export IMAGE="cluster-logging-operator"
    

    Then use the following command to extract the default channel of each matching image:

    
    for ImageNameAndTag in $(podman search --limit 9999 --tls-verify=false --list-tags $(podman search --limit 9999 --tls-verify=false $REGISTRY | grep $IMAGE) --format json | jq -r '.[] | "\(.Name):\(.Tags[])"'); do echo "======================"; skopeo inspect --tls-verify=false docker://$ImageNameAndTag | jq '.Labels."operators.operatorframework.io.bundle.channel.default.v1"' ; done
    

    Example output:

    
    ======================
    "stable-6.1"
    ======================
    "stable-6.3"
    ======================
    "stable-6.2"
    ======================
    "stable-5.9"
    

    Extracting Registry Certificate

    To extract the certificate from a registry (needed by the installer and other OpenShift components to interact with it), first set the variable as shown here (note: no trailing slash this time):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443"
    

    Then use the following command to extract the certificate:

    
    openssl s_client -showcerts -connect $REGISTRY < /dev/null | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > quay-cert.pem
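
    The sed expression prints only the lines between (and including) the BEGIN/END CERTIFICATE markers, discarding the rest of the s_client chatter. A sketch against mock output (fake certificate body; no registry needed):

```shell
# Mock s_client output: handshake chatter around a (fake) PEM block.
mock='depth=0 CN = registry
-----BEGIN CERTIFICATE-----
TUlJfakecertbody
-----END CERTIFICATE-----
DONE'

# Keep only the PEM block:
printf '%s\n' "$mock" | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'
```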
    

    Side note: on OCP, the certificate is usually found in the config map "user-ca-bundle" in the "openshift-config" namespace, under the key "ca-bundle.crt".

    Copying a container image to registry

    Assuming that the registry is defined as follows (don't forget the trailing slash):

    
    export REGISTRY="quay.tnc.bootcamp.lab:8443/"
    

    Let's say you want to copy the image "quay.io/sfhassan/oc_net_tools" to this registry; the command will be as follows. (A tag can optionally be added; if omitted, "latest" is assumed.)

    
    skopeo copy --dest-tls-verify=false docker://quay.io/sfhassan/oc_net_tools docker://quay.tnc.bootcamp.lab:8443/oc_net_tools
    

    The templatized version of this command will be:

    
    skopeo copy --dest-tls-verify=false docker://quay.io/sfhassan/oc_net_tools docker://${REGISTRY}oc_net_tools
    

    Useful Shell Snippets

    Comparing Files Across Directories

    To compare all (text) files in a directory against another version of the same files in a different directory, this snippet can be useful
    (assuming the copies being compared against live in "new_location"; adjust the path accordingly):

    
    for old_file in $(find . -type f); do cmp -s $old_file ~/new_location/$old_file; result=$?; if [ $result != 0 ]; then echo $result "..." $old_file ; fi; done
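
    The loop exercised against two throwaway directories (hypothetical file names): cmp -s exits non-zero when files differ, so only differing files are printed:

```shell
# Build two temp directories: a.txt identical, b.txt changed.
old=$(mktemp -d); new=$(mktemp -d)
printf 'same\n' > "$old/a.txt";  printf 'same\n'    > "$new/a.txt"
printf 'old\n'  > "$old/b.txt";  printf 'changed\n' > "$new/b.txt"

# Run the comparison loop from the "old" tree:
cd "$old"
for old_file in $(find . -type f); do
  cmp -s "$old_file" "$new/$old_file"
  result=$?
  if [ $result != 0 ]; then echo "$result ... $old_file"; fi
done
```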