Installing and using Forklift (2.0)
About Forklift
Forklift enables you to migrate virtual machines from VMware vSphere 6.5 and later to KubeVirt.
KubeVirt is an add-on to OKD 4.7 that allows you to run and manage virtual machine workloads alongside container workloads.
Custom resources and services
Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.
- `Provider` CR: Stores attributes that enable Forklift to connect to and interact with the source and target providers.
- `NetworkMapping` CR: Maps the networks of the source and target providers.
- `StorageMapping` CR: Maps the storage of the source and target providers.
- `Provisioner` CR: Stores the configuration of the storage provisioners, such as supported volume and access modes.
- `Plan` CR: Contains a list of VMs that are migrated together with the same migration parameters and associated network and storage mappings.
- `Migration` CR: Runs a migration plan. Only one `Migration` CR per migration plan can run at a given time. You can create multiple `Migration` CRs for a single `Plan` CR.
- Provider Inventory service:
  - Connects to the source and target providers.
  - Maintains a local inventory for mappings and plans.
  - Stores VM configurations.
- User Interface service:
  - Enables you to create and configure Forklift CRs.
  - Displays the status of the CRs and the progress of a migration.
- Controller services: Perform actions with CRs in response to user requests.
  - The Migration Controller service orchestrates migrations. When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is `Not ready` and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is `Ready` and it can be used to perform a migration. After a successful migration, the Migration Controller changes the plan status to `Completed`.
  - The Virtual Machine Import Controller, KubeVirt Controller, and Containerized Data Importer (CDI) Controller services handle most technical operations.
Forklift workflow
You can migrate virtual machines (VMs) using the Forklift console or the command line.
The following diagram is a high-level view of the migration workflow.
The Forklift workflow comprises the following steps:
- You create a source provider, a target provider, a network mapping, and a storage mapping.
- You create a migration plan that includes the following resources:
  - Source provider
  - Target provider
  - Network mapping
  - Storage mapping
  - One or more VMs
- You run a migration plan by creating a `Migration` CR that references the migration plan. If a migration is incomplete, you can run a migration plan multiple times until all VMs are migrated.
- For each VM in the migration plan, the Migration Controller creates a `VirtualMachineImport` CR and monitors its status. When all VMs have been migrated, the Migration Controller sets the status of the migration plan to `Completed`. The power state of a source VM is maintained after migration.
KubeVirt migration workflow
The KubeVirt workflow provides a detailed view of the process of migrating virtual machines (VMs).
The KubeVirt workflow comprises the following steps:
- When you run a migration plan, the Migration Controller creates a `VirtualMachineImport` CR for each source VM.
- The Virtual Machine Import Controller validates the `VirtualMachineImport` CR and generates a `VirtualMachine` CR.
- The Virtual Machine Import Controller retrieves the VM configuration, including network, storage, and metadata, linked in the `VirtualMachineImport` CR. For each VM disk:
  - The Virtual Machine Import Controller creates a `DataVolume` CR as a wrapper for a Persistent Volume Claim (PVC) and annotations.
  - The Containerized Data Importer (CDI) Controller creates a PVC. The Persistent Volume (PV) is dynamically provisioned by the `StorageClass` provisioner.
  - The CDI Controller creates an `Importer` pod.
  - The `Importer` pod connects to the source VM disk by using the VDDK SDK and streams the disk to the PV.
- After the VM disks are transferred, the Virtual Machine Import Controller creates a `Conversion` pod with the PVCs attached to it. The `Conversion` pod runs `virt-v2v`, which installs and configures device drivers on the PVCs of the target VM.
- The Virtual Machine Import Controller creates a `VirtualMachineInstance` CR.
- When the target VM is powered on, the KubeVirt Controller creates a VM pod. The VM pod runs `QEMU-KVM` with the PVCs attached as VM disks.
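As an illustration of the `DataVolume` wrapper described above, a minimal CR of this kind might look as follows. This is a sketch only: the name, namespace, source, and size are assumed values for illustration, not output that Forklift produces.

```yaml
# Illustrative sketch: a minimal DataVolume CR of the kind that CDI
# resolves into a PVC. Name, namespace, source, and size are assumptions.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk-1          # assumed name
  namespace: konveyor-forklift
spec:
  source:
    blank: {}                      # placeholder source; the Importer pod fills the disk
  pvc:
    volumeMode: Block
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
```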
Storage support and default modes
Forklift supports KubeVirt storage features.
If the KubeVirt storage does not support dynamic provisioning, Forklift applies the default settings.
Forklift uses the following default volume and access modes.
| Provisioner | Volume mode | Access mode |
| --- | --- | --- |
| kubernetes.io/aws-ebs | Block | ReadWriteOnce |
| kubernetes.io/azure-disk | Block | ReadWriteOnce |
| kubernetes.io/azure-file | Filesystem | ReadWriteMany |
| kubernetes.io/cinder | Block | ReadWriteOnce |
| kubernetes.io/gce-pd | Block | ReadWriteOnce |
| kubernetes.io/hostpath-provisioner | Filesystem | ReadWriteOnce |
| manila.csi.openstack.org | Filesystem | ReadWriteMany |
| openshift-storage.cephfs.csi.ceph.com | Filesystem | ReadWriteMany |
| openshift-storage.rbd.csi.ceph.com | Block | ReadWriteOnce |
| kubernetes.io/rbd | Block | ReadWriteOnce |
| kubernetes.io/vsphere-volume | Block | ReadWriteOnce |
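For example, a PVC that follows the defaults in the first row of the table would have the following shape. This is a sketch with assumed name, storage class, and size; in practice Forklift creates PVCs indirectly through `DataVolume` CRs rather than directly.

```yaml
# Illustrative PVC matching the kubernetes.io/aws-ebs defaults above.
# Name, storage class, and size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-vm-disk-0
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi
```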
Installing Forklift
You can install Forklift by using the OKD web console or the command line interface (CLI).
After you have installed Forklift, you must create a VMware Virtual Disk Development Kit (VDDK) image and add it to a config map.
If you are performing more than 10 concurrent VM migrations from a single ESXi host, you must increase the NFC service memory of the host to enable additional connections for migrations. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.
Installing Forklift by using the OKD web console
You can install Forklift by using the OKD web console.
- VMware vSphere 6.5 or later.
- OKD 4.7 installed.
- KubeVirt Operator installed.
- You must be logged in as a user with `cluster-admin` permissions.
- If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host.
- VMware virtual machines:
  - VMware Tools installed.
  - ISO/CDROM disks unmounted.
  - NIC with no more than one IPv4 and/or one IPv6 address.
  - VM name containing only lowercase letters (`a-z`), numbers (`0-9`), or hyphens (`-`), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (`.`), or special characters.
  - VM name that does not duplicate the name of a virtual machine in the KubeVirt environment.
  - Operating system certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with `virt-v2v`.
- Network:
  - IP addresses, VLANs, and other network configuration settings must not be changed before or after migration. The MAC addresses of the virtual machines are preserved during migration.
  - Uninterrupted and reliable network connections between the clusters and the replication repository.
  - Network ports enabled in the firewall rules:

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory; disk transfer authentication |
| 443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
| 902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |
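The VM naming rules above can be checked locally before you build a migration plan. This is a minimal sketch; the `valid_vm_name` function is illustrative and not part of Forklift.

```shell
# Hedged sketch: check a VM name against the rules listed above
# (lowercase letters, digits, hyphens; 1-253 characters; alphanumeric
# first and last character). Not a Forklift command.
valid_vm_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,251}[a-z0-9])?$'
}

valid_vm_name "rhel8-web-01" && echo "rhel8-web-01: valid"
valid_vm_name "My_VM.prod" || echo "My_VM.prod: invalid"
```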
- In the OKD web console, navigate to Operators → OperatorHub.
- Use the Filter by keyword field to search for forklift-operator.
  The Forklift Operator is a Community Operator. Red Hat does not support Community Operators.
- Click the Forklift Operator and then click Install.
- On the Install Operator page, click Install.
- Click Operators → Installed Operators to verify that the Forklift Operator appears in the konveyor-forklift project with the status Succeeded.
- Click the Forklift Operator.
- Under Provided APIs, locate the ForkliftController, and click Create Instance.
- Click Create.
- Click Workloads → Pods to verify that the Forklift pods are running.
Installing Forklift from the CLI
You can install Forklift from the command line interface (CLI).
- VMware vSphere 6.5 or later.
- OKD 4.7 installed.
- KubeVirt Operator installed.
- You must be logged in as a user with `cluster-admin` permissions.
- If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host.
- VMware virtual machines:
  - VMware Tools installed.
  - ISO/CDROM disks unmounted.
  - NIC with no more than one IPv4 and/or one IPv6 address.
  - VM name containing only lowercase letters (`a-z`), numbers (`0-9`), or hyphens (`-`), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (`.`), or special characters.
  - VM name that does not duplicate the name of a virtual machine in the KubeVirt environment.
  - Operating system certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with `virt-v2v`.
- Network:
  - IP addresses, VLANs, and other network configuration settings must not be changed before or after migration. The MAC addresses of the virtual machines are preserved during migration.
  - Uninterrupted and reliable network connections between the clusters and the replication repository.
  - Network ports enabled in the firewall rules:

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory; disk transfer authentication |
| 443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
| 902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |
- Create the konveyor-forklift project:

  ```shell
  $ cat << EOF | oc apply -f -
  apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    name: konveyor-forklift
  EOF
  ```

- Create an `OperatorGroup` CR called `migration`:

  ```shell
  $ cat << EOF | oc apply -f -
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: migration
    namespace: konveyor-forklift
  spec:
    targetNamespaces:
      - konveyor-forklift
  EOF
  ```

- Create a `Subscription` CR for the Operator:

  ```shell
  $ cat << EOF | oc apply -f -
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: forklift-operator
    namespace: konveyor-forklift
  spec:
    channel: development
    installPlanApproval: Automatic
    name: forklift-operator
    source: community-operators
    sourceNamespace: openshift-marketplace
    startingCSV: "konveyor-forklift-operator.v2.0.0"
  EOF
  ```

- Create a `ForkliftController` CR:

  ```shell
  $ cat << EOF | oc apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: ForkliftController
  metadata:
    name: forklift-controller
    namespace: konveyor-forklift
  spec:
    olm_managed: true
  EOF
  ```

- Verify that the Forklift pods are running:

  ```shell
  $ oc get pods -n konveyor-forklift
  ```

  Example output:

  ```
  NAME                                   READY   STATUS    RESTARTS   AGE
  forklift-controller-788bdb4c69-mw268   2/2     Running   0          2m
  forklift-operator-6bf45b8d8-qps9v      1/1     Running   0          5m
  forklift-ui-7cdf96d8f6-xnw5n           1/1     Running   0          2m
  ```
Creating a VDDK image
Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks.
You must download VDDK, build a VDDK image, push it to a private image registry, and then add the image to the `v2v-vmware` config map.

Storing the VDDK image in a public registry might violate the VMware license terms.
- OKD image registry or a secure external registry.
- Podman installed.
- If you are using an external registry, KubeVirt must have access to it.
- Create and navigate to a temporary directory:

  ```shell
  $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  ```

- In a browser, navigate to the VMware code site and click SDKs.
- In the Hyperconverged Infrastructure section, click Virtual Disk Development Kit (VDDK).
- Select the latest VDDK version.
- Click Download and save the VDDK archive file in the temporary directory.
- Extract the VDDK archive:

  ```shell
  $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  ```

- Create a `Dockerfile`:

  ```shell
  $ cat > Dockerfile <<EOF
  FROM registry.access.redhat.com/ubi8/ubi-minimal
  COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
  RUN mkdir -p /opt
  ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
  EOF
  ```
- Build the VDDK image:

  ```shell
  $ podman build . -t <image> (1)
  ```

  1 Specify the image, for example, `image-registry.openshift-image-registry.svc:5000/openshift/vddk:latest` or `registry.example.com:5000/vddk:latest`.

- Push the VDDK image to the registry:

  ```shell
  $ podman push <image>
  ```

- Patch the `v2v-vmware` config map to add the `vddk-init-image` field:

  ```shell
  $ oc patch configmap/v2v-vmware -n openshift-cnv \
    -p '{"data": {"vddk-init-image": "<image>"}}'
  ```
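After the patch, the config map should contain the new field. A sketch of the expected shape follows; the image value is an example, not a required name.

```yaml
# Expected shape of the patched v2v-vmware config map
# (sketch; the vddk-init-image value is an assumed example)
apiVersion: v1
kind: ConfigMap
metadata:
  name: v2v-vmware
  namespace: openshift-cnv
data:
  vddk-init-image: registry.example.com:5000/vddk:latest
```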
Increasing the NFC service memory of an ESXi host
If you are performing more than 10 concurrent migrations from a single ESXi host, you must increase the NFC service memory of the host to enable additional connections for migrations. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.
- Log in to the ESXi host as root.
- Change the value of `maxMemory` to `1000000000` in `/etc/vmware/hostd/config.xml`:

  ```xml
  ...
  <nfcsvc>
    <path>libnfcsvc.so</path>
    <enabled>true</enabled>
    <maxMemory>1000000000</maxMemory>
    <maxStreamMemory>10485760</maxStreamMemory>
  </nfcsvc>
  ...
  ```

- Restart `hostd`:

  ```shell
  # /etc/init.d/hostd restart
  ```
You do not need to reboot the host.
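The `maxMemory` edit can also be scripted with `sed`. The sketch below demonstrates the substitution on a sample file; on the ESXi host the real path is `/etc/vmware/hostd/config.xml`, and you should back up the file before editing it in place.

```shell
# Hedged sketch: the maxMemory edit scripted with sed, demonstrated
# on a sample file rather than the live ESXi configuration.
cat > /tmp/nfcsvc-sample.xml << 'EOF'
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>10485760</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
EOF

# Replace only the maxMemory value; maxStreamMemory is left untouched
sed -i 's|<maxMemory>[0-9]*</maxMemory>|<maxMemory>1000000000</maxMemory>|' /tmp/nfcsvc-sample.xml
grep '<maxMemory>' /tmp/nfcsvc-sample.xml
```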
Migrating virtual machines to KubeVirt
You can migrate virtual machines to KubeVirt by using the Forklift web console or the command line interface (CLI).
Migrating virtual machines by using the Forklift web console
You can migrate virtual machines to KubeVirt by using the Forklift web console.
You add the source and target providers and create a network mapping and a storage mapping. Then, you create and run a migration plan.
Adding the KubeVirt provider
You can add the KubeVirt provider by using the Forklift web console.
- Service account token with `cluster-admin` privileges.

- In the web console, navigate to Providers and click Add provider.
- Select KubeVirt from the Type list.
- Fill in the following fields:
  - Name: OKD cluster name to display in the list of providers
  - URL: OKD cluster API endpoint
  - Service account token: `cluster-admin` service account token
- Click Check connection to verify the credentials.
- Click Add.

The provider appears in the list of providers.
Adding a source provider
You can add VMware as a source provider by using the Forklift web console.
- You must have a VMware vCenter user account with administrator privileges.

- In the Forklift web console, click Providers.
- Click Add provider.
- Select VMware from the Type list.
- Fill in the following fields:
  - Name: Name to display in the list of providers
  - Hostname or IP address: vCenter host name or IP address
  - Username: vCenter admin user name, for example, `administrator@vsphere.local`
  - Password: vCenter admin password
  - SHA-1 fingerprint: vCenter SHA-1 fingerprint

    To obtain the vCenter SHA-1 fingerprint, enter the following command:

    ```shell
    $ openssl s_client \
        -connect <vcenter.example.com>:443 \ (1)
        < /dev/null 2>/dev/null \
        | openssl x509 -fingerprint -noout -in /dev/stdin \
        | cut -d '=' -f 2
    ```

    1 Specify the vCenter host name.

- Click Add to add and save the provider.

The VMware provider appears in the list of providers.
Selecting a migration network for a source provider
Select a migration network to reduce risk to the VMware environment and to improve migration performance.
By default, Forklift selects the management network of a source provider. However, using the management network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the VMware platform because the disk transfer operation might saturate the network and impede communication between vCenter and the ESXi hosts.
- The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.
- The migration network must be accessible to the KubeVirt nodes through the default gateway.
  The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.
- The migration network must have jumbo frames enabled.
- You must have administrator privileges for each ESXi host.
- In the web console, navigate to Providers and click VMware.
- Click the host number in the Hosts column beside a VMware provider to view a list of hosts.
- Select one or more hosts and click Select migration network.
- Complete the following fields:
  - Network: Select the migration network.
    You can clear the migration network selection by selecting the default management network.
  - ESXi host admin username: Specify the ESXi host admin user name, for example, `root`.
  - ESXi host admin password: Specify the ESXi host password.
- Click Save.
- Verify that the status of each host is Ready.
  If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

The migration network is displayed in the list of ESXi hosts.
Creating a network mapping
You can create a network mapping by using the Forklift web console to map source networks to KubeVirt networks.
You cannot map an opaque network, typically managed by NSX, to a KubeVirt network.
- Source and target providers added to the web console.

- In the web console, navigate to Mappings → Network.
- Click Create mapping.
- Complete the following fields:
  - Name: Enter a name to display in the network mappings list.
  - Source provider: Select a source provider.
  - Target provider: Select a target provider.
  - Source networks: Select a source network.
  - Target namespaces/networks: Select a target network.
- Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.
- Click Create.

The network mapping is displayed on the Network mappings screen.
Creating a storage mapping
You can create a storage mapping to map source data stores to KubeVirt storage classes.
- Source and target providers added to the web console.
- Local and shared persistent storage that support VM migration.

- In the web console, navigate to Mappings → Storage.
- Click Create mapping.
- Complete the following fields:
  - Name: Enter a name to display in the storage mappings list.
  - Source provider: Select a source provider.
  - Target provider: Select a target provider.
  - Source datastores: Select a source data store.
  - Target storage classes: Select a target storage class.
- Optional: Click Add to create additional storage mappings or to map multiple data stores to a single storage class.
- Click Create.

The mapping is displayed on the Storage mappings screen.
Creating a migration plan
You can create a migration plan by using the Forklift web console.
A migration plan allows you to group virtual machines that should be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.
- VDDK image added to the `v2v-vmware` config map.
- Source and target providers added to the web console.

- In the web console, navigate to Migration plans and click Create migration plan.
- Complete the following fields:
  - Plan name: Enter a migration plan name to display in the migration plan list.
  - Plan description: Optional: Brief description of the migration plan.
  - Source provider: Select a source provider.
  - Target provider: Select a target provider.
  - Target namespace: You can type to search for an existing target namespace or create a new namespace.
- Click Next.
- Click By clusters and hosts or By folders, select clusters, hosts, or folders to filter the VMs, and then click Next.
- Select the VMs to migrate and click Next.
- Select an existing network mapping or create a new network mapping.
  - Optional: If you select Create a new network mapping, select a target network for each source network.
  - Optional: Select Save mapping to use again and enter a network mapping name.
- Click Next.
- Select an existing storage mapping or create a new storage mapping.
  - Optional: If you select Create a new storage mapping, select a target storage class for each source data store.
  - Optional: Select Save mapping to use again and enter a storage mapping name.
- Click Next.
- Review your migration plan and click Finish.

The migration plan is saved in the migration plan list.
Running a migration plan
You can run a migration plan and view its progress in the web console.
- In the web console, navigate to Migration plans.
  The Migration plans list displays the source and target providers, the number of VMs being migrated, and the status of the plan.
- Click Start beside a migration plan with a Ready status to run the plan.
- To view the Migration Details by VM screen, click the name of a migration plan.
  This screen displays the migration start and end time, the amount of data copied, and a progress pipeline for each VM being migrated.
- Optional: Click a VM to view the migration steps, elapsed time of each step, and the state.
Migrating virtual machines from the CLI
You can migrate virtual machines from the command line (CLI) by creating the following custom resources (CRs):
- `Secret` CR: Contains the VMware provider credentials.
- `Provider` CR: Describes the VMware provider.
- `Plan` CR: Describes the source and target clusters, network mappings, data store mappings, and VMs to migrate.
- `Migration` CR: Runs the `Plan` CR. If a migration does not complete, you can create a new `Migration` CR, without changing the `Plan` CR, to migrate the remaining VMs. You can associate multiple `Migration` CRs with a single `Plan` CR.
- KubeVirt Operator installed.
- Forklift Operator installed.
- OpenShift CLI installed.
- VDDK image added to the `v2v-vmware` config map.
- You must be logged in as a user with `cluster-admin` privileges.
- You must have a VMware vCenter user account with administrator privileges.
- Obtain the SHA-1 fingerprint of the vCenter host:

  ```shell
  $ openssl s_client \
      -connect <vcenter_host>:443 \ (1)
      < /dev/null 2>/dev/null \
      | openssl x509 -fingerprint -noout -in /dev/stdin \
      | cut -d '=' -f 2
  ```

  1 Specify the vCenter host name.

  Example output:

  ```
  01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
  ```
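The certificate-parsing half of that pipeline can be tried locally without a vCenter connection. The sketch below uses a throwaway self-signed certificate; the file paths are arbitrary, and `-sha1` is stated explicitly rather than relying on the default digest.

```shell
# Hedged local demo of the fingerprint extraction, using a throwaway
# self-signed certificate instead of a live vCenter connection.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Same parsing stage as the vCenter command, with the digest named explicitly;
# prints a colon-separated 20-byte SHA-1 fingerprint
openssl x509 -fingerprint -sha1 -noout -in /tmp/demo.crt | cut -d '=' -f 2
```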
- Create a `Secret` CR for the VMware provider:

  ```shell
  $ cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: vmware-secret
    namespace: konveyor-forklift
  type: Opaque
  stringData:
    user: <user_name> (1)
    password: <password> (2)
    thumbprint: <fingerprint> (3)
  EOF
  ```

  1 Specify the vCenter administrator account, for example, `administrator@vsphere.local`.
  2 Specify the vCenter password.
  3 Specify the SHA-1 fingerprint of the vCenter host.
Create a
Provider
CR for the VMware provider:$ cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: vmware-provider namespace: konveyor-forklift spec: type: vsphere url: <api_end_point> (1) secret: name: <vmware_secret> (2) namespace: konveyor-forklift EOF
1 Specify the vSphere API end point, for example, https://<vcenter.host.com>/sdk
.2 Specify the name of the VMware Secret
CR. -
Create a
Plan
CR for the migration:$ cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan_name> (1) namespace: konveyor-forklift spec: provider: source: name: vmware-provider namespace: konveyor-forklift destination: name: destination-cluster namespace: konveyor-forklift map: networks: (2) - source: (3) id: <source_network_mor> (4) name: <source_network_name> destination: type: pod name: pod namespace: konveyor-forklift datastores: (5) - source: (6) id: <source_datastore_mor> (7) name: <source_datastore_name> destination: storageClass: standard vms: (8) - id: <source_vm_mor> (9) - name: <source_vm_name> EOF
1 Specify the name of the Plan
CR.2 You can create multiple network mappings for source and destination networks. 3 You can use either the id
or thename
parameter to specify the source network.4 Managed object reference of the source network. 5 You can create multiple storage mappings for source data stores and destination storage classes. 6 You can use either the id
or thename
parameter to specify the source data store.7 Managed object reference of the source data store. 8 You can use either the id
or thename
parameter to specify the source VM.9 Managed object reference of the source VM. -
Create a
Migration
CR to run thePlan
CR:$ cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration_name> (1) namespace: konveyor-forklift spec: plan: name: <plan_name> (2) namespace: konveyor-forklift EOF
1 Specify the name of the Migration
CR.2 Specify the name of the Plan
CR that you are running.The
Migration
CR creates aVirtualMachineImport
CR for each VM that is migrated. -
Monitor the progress of the migration by viewing the
VirtualMachineImport
pods:$ oc get pods -n konveyor-forklift
Uninstalling Forklift
You can uninstall Forklift by using the OKD web console or the command line interface (CLI).
Uninstalling Forklift by using the OKD web console
You can uninstall Forklift by using the OKD web console to delete the konveyor-forklift
project and custom resource definitions (CRDs).
- You must be logged in as a user with `cluster-admin` privileges.

- Navigate to Home → Projects.
- Enter `forklift` in the Search field to locate the konveyor-forklift project.
- On the right side of the project, select Delete Project from the Options menu.
- In the Delete Project pane, enter the project name and click Delete.
- Navigate to Administration → CustomResourceDefinitions.
- Enter `forklift` in the Search field to locate the CRDs in the `forklift.konveyor.io` group.
- On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu.
Uninstalling Forklift from the CLI
You can uninstall Forklift from the command line interface (CLI) by deleting the konveyor-forklift
project and the forklift.konveyor.io
custom resource definitions (CRDs).
- You must be logged in as a user with `cluster-admin` privileges.

- Delete the project:

  ```shell
  $ oc delete project konveyor-forklift
  ```

- Delete the CRDs:

  ```shell
  $ oc get crd -o name | grep 'forklift' | xargs oc delete
  ```

- Delete the OAuthClient:

  ```shell
  $ oc get oauthclient -o name | grep 'forklift' | xargs oc delete
  ```
Troubleshooting
Using must-gather
The must-gather
tool for Forklift collects logs, metrics, and information about Forklift custom resources.
- You must be logged in to the KubeVirt cluster as a user with the `cluster-admin` role.
- You must have the OKD CLI (`oc`) installed.

- Navigate to the directory where you want to store the `must-gather` data.
- Run the `oc adm must-gather` command with the `--image` argument:

  ```shell
  $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
  ```

  The data is saved in a local `must-gather` directory.

- To view Prometheus metrics data:
  - Run the `oc adm must-gather` command with the `gather_metrics_dump` script:

    ```shell
    $ oc adm must-gather --image quay.io/konveyor/forklift-must-gather:latest -- /usr/bin/gather_metrics_dump
    ```

    This process might take a long time. The tool processes the most recent `prom_data.tar.gz` file in the `/must-gather/metrics` directory.

  - Create a local Prometheus instance to display the data:

    ```shell
    $ make prometheus-run
    ```

  - After you have viewed the data, delete the Prometheus instance and data:

    ```shell
    $ make prometheus-cleanup
    ```