It's taken me a while to get this post out; I have been swamped with multiple projects in the realm of automation and orchestration for Software-Defined Storage. With that said, let's get into how to create wrapped vCO workflows that call OnCommand Workflow Automation (WFA) workflows.
First, let's discuss workflow design by looking at the creation of a specific wrapped workflow. We will use the creation of a VMware datastore as an example of using smaller, more specific workflows to build a larger, more complex workflow. In this example the end goal for the larger workflow is a datastore with the following characteristics: NFS, 10GB in size, deduplication enabled, a QoS policy of 13MB/s, a snapshot policy set, and the datastore added to each host in the cluster. We can accomplish this in two different ways. First, we can create all of the smaller workflows needed to compose the overall larger workflow. To accomplish this, the following small storage workflows would need to be created:
- Create a Clustered Data ONTAP NFS Volume
- Add Deduplication to Volume
- Add QoS Policy to a Volume
- Create a Snapshot Policy for Volume
- Create SnapMirror
- Create SnapVault
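The first approach can be sketched in plain JavaScript (the language vCO uses for scripting). This is only an illustration of the composition pattern: the function names are hypothetical stand-ins for the workflows listed above, not the vCO or WFA API.

```javascript
// Sketch of the first approach: each small workflow becomes a function,
// and the wrapper calls them in sequence. All names are hypothetical
// stand-ins for the vCO/WFA workflows listed above.

function createNfsVolume(cluster, vserver, name, sizeGb) {
  // In vCO this step would invoke the WFA "Create a Clustered
  // Data ONTAP NFS Volume" workflow.
  return { cluster: cluster, vserver: vserver, name: name, sizeGb: sizeGb, features: [] };
}

function addDeduplication(volume) {
  volume.features.push("dedup");
  return volume;
}

function addQosPolicy(volume, limitMbps) {
  volume.features.push("qos:" + limitMbps + "MB/s");
  return volume;
}

function addSnapshotPolicy(volume, policyName) {
  volume.features.push("snapshot:" + policyName);
  return volume;
}

// The wrapper workflow: chain the small workflows into one larger one
// that produces the 10GB NFS datastore described above.
function createVmwareDatastore(cluster, vserver, name) {
  var vol = createNfsVolume(cluster, vserver, name, 10);
  addDeduplication(vol);
  addQosPolicy(vol, 13);
  addSnapshotPolicy(vol, "default");
  return vol;
}
```

Calling `createVmwareDatastore("cluster1", "vs1", "ds_nfs_01")` runs each small step in order and returns a volume carrying the deduplication, QoS, and snapshot characteristics.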
The second method for creating a larger workflow is to first create environment "pack" workflows. An environment "pack" adds characteristics for a specific environment, such as Oracle, Microsoft Exchange, or VMware, to volumes, along with both backup and recovery and disaster recovery options. With this method the workflows needed are as follows:
- Create a Clustered Data ONTAP NFS Volume
- Add VMware Properties to Volume (would include deduplication, thin provisioning, and a predetermined QoS policy)
- Add Backup and Recovery and Disaster Recovery Options to Volume (includes SnapShot policy, SnapMirror replication, and SnapVault Backup)
Obviously, this decreases the number of workflows needed to create environment-specific volumes, though it slightly increases the complexity of the baseline workflows used. This method still allows for a modular approach to creating larger workflows as well.
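Continuing the same plain-JavaScript sketch, the "pack" approach bundles several properties into a single step, so the wrapper needs fewer calls. Again, the names are hypothetical illustrations, not actual WFA workflow names.

```javascript
// Sketch of the "environment pack" approach: one pack workflow applies
// all environment-specific properties at once. Names are hypothetical.

function addVmwareProperties(volume) {
  // Bundles deduplication, thin provisioning, and a predetermined
  // QoS policy into a single step.
  volume.features = volume.features.concat(["dedup", "thin", "qos:13MB/s"]);
  return volume;
}

function addProtectionOptions(volume) {
  // Bundles the Snapshot policy, SnapMirror replication, and
  // SnapVault backup into a single step.
  volume.features = volume.features.concat(["snapshot", "snapmirror", "snapvault"]);
  return volume;
}

// The wrapper now chains two pack steps instead of five or six
// individual ones, at the cost of less granular baseline workflows.
function createVmwareDatastorePack(volume) {
  addVmwareProperties(volume);
  addProtectionOptions(volume);
  return volume;
}
```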
Pictured below is a series of workflows we have developed as part of a sample pack. Notice that the first four workflows listed above appear in the sample pack below.
Now let's look at how we accomplish the creation of the larger workflow. In vCO this larger workflow is known as a wrapped workflow, since you are wrapping multiple smaller workflows into a single larger workflow.
Building the Wrapper Workflow
Note: The steps below are merely an example of how to create a wrapper workflow from three other vCO workflows that call WFA. Each workflow is a different entity unto itself. Please use the steps below as a guide for building your own, not as an exact "how to" for every workflow.
Start by opening a baseline workflow and clicking Edit. In this case we will open Create a Clustered Data ONTAP NFS Volume. This specific workflow creates a new NFS volume and either uses a default QoS policy or creates a new one.
It is recommended that you make a copy of the baseline workflow and work with that instead of the actual workflow. Right-click the workflow you want to duplicate and click "Duplicate workflow". At the Duplicate workflow screen, rename the workflow and place it in the correct folder.
Once you have done this, right-click the duplicated workflow and click Edit. Go to the Schema tab, and then in the left-side menu bar select "All Workflows".
Drag and drop the Add Deduplication to Volume(s) and Add Thin Provisioning to Volume(s) workflows into the schema after the NetApp WFA Workflow Execution. Ignore the "Do you want to add the activity's parameters as input/output to the current workflow" question at the top of the screen when you do this.
Next, click on the edit button on the Add Deduplication to Volume(s) workflow that was added to the overall workflow. This will bring up the edit screen.
Click "not set" next to clusterName, select ClusterName in the Chooser… screen, and then click Select. Perform the same step for vserverName, selecting VserverName in the Chooser… screen and then clicking Select.
For volumeGroup, click "not set". At the Chooser… screen, select Create parameter/attribute in workflow (highlighted in red below).
In the Parameter information screen, ensure that the Name says volumeGroup and then click Ok.
For volumeList, click "not set". volumeList is found in both of the new workflows being wrapped into the larger workflow, and it maps back to (and is equal to) VolumeName. Therefore, at the Chooser… screen select VolumeName and then click Select.
When the final Local Parameter has been mapped, click Close. You will then be taken back to the Schema screen.
You will now need to edit Bind Inputs. Click on the edit button for Bind Inputs.
At the Edit screen, click the IN tab at the top, then click the Bind to workflow parameter/attribute button.
At the Chooser… screen, click volumeGroup and then click Select. volumeGroup will now be displayed in the IN tab as a parameter.
Next, click the General tab. You will see volumeGroup listed under Attributes. It needs to be moved from Attributes to be an Input parameter: right-click volumeGroup and select Move as INPUT parameter.
Click on the Inputs tab and verify that volumeGroup has now been moved.
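The parameter binding done in the steps above can be summed up as a mapping from the wrapper's inputs to the names the inner Add Deduplication to Volume(s) workflow expects. The sketch below illustrates that mapping in plain JavaScript; it uses the parameter names from the walkthrough but is not the actual vCO binding API.

```javascript
// Illustration of the parameter binding from the steps above: the
// wrapper's inputs are mapped onto the names the inner
// "Add Deduplication to Volume(s)" workflow expects. Plain-JavaScript
// sketch only; vCO performs this binding through its GUI, not code.

function bindInputs(wrapperInputs) {
  return {
    clusterName: wrapperInputs.ClusterName,  // bound to ClusterName
    vserverName: wrapperInputs.VserverName,  // bound to VserverName
    volumeGroup: wrapperInputs.volumeGroup,  // promoted to a wrapper INPUT parameter
    volumeList:  wrapperInputs.VolumeName    // volumeList maps back to VolumeName
  };
}
```

For example, `bindInputs({ ClusterName: "c1", VserverName: "vs1", volumeGroup: "g1", VolumeName: "vol1" })` hands the inner workflow `volumeList: "vol1"`, exactly as the VolumeName selection in the Chooser… screen does.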
The workflow should now be ready to test.
Pretty easy, eh? By following the procedures above you too will be able to make your own wrapper workflows and extend storage automation and orchestration into your SDDC.
Here are the links for the Software-Defined Storage with NetApp and VMware series:
As always I appreciate the time you have taken to read this post and I’d love to get your feedback and hear what you’d like to see. Workflow ideas, blog post ideas, and general comments are all welcome.
Thanks for reading!