Cisco Crosswork Workflow Manager

Install CWM using Docker Installer Tool

This section contains the following topics:
Install CWM using Docker Installer Tool

Install CWM using Docker Installer Tool

CWM 2.0 is installed on the Cisco Crosswork platform by first deploying the Crosswork OVA file using a Docker image on VMware vCenter 7.0 (or higher) and then installing the CWM CAPP file using the installation script.

Prerequisites

  • VMware vCenter Server 7.0 (U3p or later) and ESXi 7.0 (U3p or later). Refer to the Crosswork Network Controller 7.0 installation requirements for more details.
  • Docker version 19 or higher.
  • sshpass installed. For Mac, you can use brew install sshpass.
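
The version requirements above can be sanity-checked from the shell before you start. This is an illustrative sketch only; the helper function and the sample version strings are not part of the installer.

```shell
# Returns success when the given Docker version string is at least major version 19.
docker_version_ok() {
  major=${1%%.*}
  [ "$major" -ge 19 ]
}

# Against the local client (assumes docker is on PATH):
#   docker_version_ok "$(docker version --format '{{.Client.Version}}')"
docker_version_ok "20.10.24" && echo "Docker version OK"

# sshpass presence check (install with 'brew install sshpass' on Mac):
#   command -v sshpass >/dev/null || echo "sshpass missing"
```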

Use the Script to Deploy Crosswork and CWM

Procedure

  1. Step 1 On your Docker-capable machine, create a directory where you will store all the files you will use during this installation.
    Note
    If you are using a Mac, ensure that the directory name is in lower case.
  2. Step 2 Download the OVA file containing the Crosswork Network Controller package from cisco.com to the directory you created. The download contains the Crosswork tar.gz CAPP file, the CWM .ova file, the install.sh installation script, the configuration.json file, and the Docker installer image tar.gz (along with these instructions).
  3. Step 3 Import the Docker installer image by running the following command. Be sure to adjust the image name as needed: docker image import <docker-image-name>.tar.gz your-image-name:your-tag
  4. Step 4 Inside the directory, create a .txt file and paste the VMware installation template given below. For this instruction, we'll name the file deployment.tfvars.txt.
    Cw_VM_Image = "" # Line added automatically by installer.
    ClusterIPStack = "IPv4"
    DataIPNetmask = "255.255.255.0"
    DataIPGateway = "192.168.1.1"
    DNS = "DNS"
    DomainName = "domain_name"
    CWPassword = "your_crosswork_password"
    VMSize = "XLarge"
    vm_sizes = {
      "xlarge" = {
        vcpus = 24
        cpu_reservation = 24000
        // Memory in Mbytes
        memory = 128000
      }
    }
    NTP = "ntp.esl.cisco.com"
    Timezone = "Europe/Paris"
    EnableSkipAutoInstallFeature = "True"
    ManagementVIP = "your_mgmt_vip"
    ManagementIPNetmask = "255.255.255.0"
    ManagementIPGateway = "your_mgmt_gateway"
    ThinProvisioned = "true"
    DataVIP = "your_data_vip"
    CwVMs = {
      "0" = {
        VMName = "your_VM_name",
        ManagementIPAddress = "your_mgmt_ip",
        DataIPAddress = "your_data_ip",
        NodeType = "Hybrid"
      }
    }
    VCenterDC = {
      VCenterAddress = "your_vcenter_address",
      VCenterUser = "your_username",
      VCenterPassword = "your_password",
      DCname = "your_datacenter_name",
      MgmtNetworkName = "VM Network",
      DataNetworkName = "SVM Data Network",
      VMs = [{
        HostedCwVMs = ["0"],
        Host = "your_VM_host",
        Datastore = "your_VM_datastore",
        HSDatastore = "your_VM_hsdatastore"
      }]
    }
    SchemaVersion = "7.1.0"

    Note
    Note the difference between your VCenter and Datacenter.
  5. Step 5 Edit the parameters to match your deployment.
    Note
    To learn more about the installation parameters, please refer to the Single VM chapter in the Cisco Crosswork Network
    Controller 7.0 Installation Guide.
  6. Step 6 Inside the directory, create another file named product.json and paste the data below.

    {
      "product_id": "CWM",
      "attribute": {
        "key1": "value1",
        "key2": "value2"
      }
    }
  7. Step 7 Open the configuration.json file and provide the following parameters to match your deployment:

    {
      "SVM_NAME": "your_VM_name",
      "host": {
        "remote_user": "your_username",
        "remote_password": "your_password",
        "remote_host": "your_scp_host",
        "remote_port": "22",
        "capp_file": "/path/to/capp_file.tar.gz"
      },
      "cwm_login": {
        "ip": "your_mgmt_ip",
        "cwm_user": "admin",
        "cwm_old_password": "admin",
        "cwm_password": "your_new_password"
      },
      "deployment": {
        "tfvars_path": "/path/to/deployment.tfvars.txt",
        "ova_file": "/path/to/cwm.ova",
        "product_json": "/path/to/product.json"
      }
    }

    • For host, provide the details of the SCP server where your Crosswork CAPP file is located: the host address and port, your username and password, and the path to the file.
    • For cwm_login, provide your management IP and the default Crosswork username and password. In cwm_password, provide the new password to replace the default one upon installation completion.
    • For deployment, provide the local paths to the deployment.tfvars.txt created in a previous step, to the CWM OVA file, and to the product.json file.
  8. Step 8 From the directory, run the installer script: bash install.sh
    This will start the installation process for the Crosswork platform and then for CWM once the platform is deployed.
  9. Step 9 To follow the installation inside the Docker container, run the following command: sudo docker ps -a
    Copy the ID of the container in which the installation started. Usually its name contains the OVA filename, such as: cw-na-cwm-7.1.0-20-releasecnc710-250512-cwm-59-50
    To see the logs, run:  sudo docker logs your_container_id -f
  10. Step 10 Once the installation script is done and the deployment status reaches 100%, go to http://your_mgmt_vip_address:30603 and log in with the default admin user and the password you provided in configuration.json.

System

This section covers the following topics:
Architecture overview

Architecture overview

Cisco Crosswork Workflow Manager 2.0 architecture is a microservice-based solution that operates on top of the CNC platform. This section shows a diagram presenting its core architectural components along with short descriptions of each.

  • UI Server: Allows operators to add and instantiate workflows, enter workflow data, list running workflows, and monitor job progress. The Administration section of the CNC UI enables users to add workers, manage worker processes, and assign activities from adapters to workers.
  • REST API: Covers all interaction with the CWM application: deploying adapters, publishing and instantiating workflows, and managing workers, resources, and secrets.
  • API Server: Dispatches API requests to relevant microservices.
  • Engine: The core component that conducts how workflows are handled. It interprets and manages the execution of workflow definitions.
  • Engine Worker (Workflow Worker): Executes the workflow tasks. It receives the workflow tasks from the Engine, executes them in the correct order, and sends the results back to the Engine.
  • Worker Manager: Manages the Workflow Workers. It ensures that the correct number of workers are running and that they are properly configured.
  • Adapter Manager: Manages the adapters used by the system. It installs, configures, and updates adapters (“plugins”) and ensures that they are compatible with the system.
  • Event Manager: Manages incoming and outgoing events, dispatching them to correct event queues. Events are signals coming from external sources with which the workflows can interact.
  • Adapter SDK & XDK: Helps developers create new adapters to integrate with external systems. The XDK application extends the capabilities of the Adapter SDK to enable developers to automatically build interfaces and message logic for custom adapters.
  • Workflow Definitions: Workflow code written in the JSON format based on the Serverless Workflow specification.
  • Crosswork Network Controller (CNC): Runtime platform for the CWM application. It is a collection of services that provide the necessary infrastructure to support the deployment and management of the application within a Cluster deployment.
  • PostgreSQL: The database that the system uses to store and manage its data.
  • DSL Engine: Executes the Domain-Specific Language (DSL) used to define the workflows. It parses the DSL, generates the appropriate workflow code, and compiles it for execution.
  • Engine Matching: Matches incoming events with the appropriate workflow. It determines which workflow should execute based on the event data and the defined workflow constraints.
  • Engine History: Tracks the history of executed workflows. It stores the metadata and execution details of all completed, running, and failed workflows.

API

This section covers the following topics:
CWM API Overview
Use the CNC Workflow Automation Postman collection

CWM API Overview

Cisco developed the Cisco Crosswork Workflow Manager API based on Representational State Transfer (REST) design principles. You can access the API using HTTP and data files formatted using JSON. The API indicates the success or failure of a given request using relevant HTTP response codes. Data retrieval methods require a GET request, while methods for adding, changing, or deleting data require POST, PUT, PATCH, or DELETE methods, as appropriate. The API returns errors if you send requests using the wrong request type.
You can explore the CWM API using the CWM 2.0 Postman collection in Postman.
For a full API reference, see the dedicated DevNet space: https://devnetapps.cisco.com/docs/crosswork/workflow-manager/introduction/

Use the CNC Workflow Automation Postman collection
Follow these steps to import the collection to the Postman application and set up the development environment.

Before you begin
Be sure that you have access to a Postman web application account or have installed the Postman desktop app. For details, see https://www.postman.com/downloads/
You must also download the CNC Workflow Automation Postman collection in JSON format by clicking this link and then unzip the archive to an accessible storage resource.

Procedure

  1. Step 1 Launch Postman and go to Collections.
  2. Step 2 Click Import, select folders from the Drop anywhere to import screen, and point to the folder that you unzipped from the CNC Workflow Automation Postman collection archive.
  3. Step 3 Go to Environments and select the newly imported test environment.
  4. Step 4 Provide current values for the baseUrl and endPoint variables to match the IP address and port of your CNC Workflow Automation instance. Save the changes.

To access the CNC Workflow Automation API, use baseurl/crosswork/cnc/v71/, where baseurl is the IP address and port number of your Crosswork Network Controller (CNC) instance with CNC Workflow Automation installed. For example: https://172.22.141.178:30603
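
As a sketch of how a client might assemble request URLs against this base (the IP, port, and the /workflow path below are placeholders for illustration, not a documented endpoint):

```python
from urllib.parse import urljoin

def cwm_url(base: str, path: str) -> str:
    """Join a CWM base URL and an API path, tolerating stray slashes."""
    return urljoin(base.rstrip("/") + "/", path.lstrip("/"))

url = cwm_url("https://172.22.141.178:30603", "/crosswork/cnc/v71/workflow")
print(url)  # https://172.22.141.178:30603/crosswork/cnc/v71/workflow
# A GET to such a URL (with your session authentication) would retrieve data,
# per the REST conventions described above; POST/PUT/PATCH/DELETE modify it.
```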

Events

This section covers the following topics:
Event handling overview
Define a Kafka event

Event handling overview

The event handling mechanism enables CWM to interact with external brokers for handling outbound and inbound events. Workflows can act as either consumers or producers of events which can be used to initiate a new workflow, or signal an existing workflow. For each event type that you define, you can add correlation attributes for filtering events and routing them to the workflow waiting for the event containing specific attribute values.
Event messages need to be defined according to the Cloud Events specification. See Event message format for more details.

Brokers and protocols

CWM supports the Kafka broker and the AMQP and HTTP protocols for handling events. Events can be either consumed by a workflow running inside CWM (incoming events forwarded by a broker) or produced by a running workflow and forwarded to an external system (outgoing events received by a broker).

It is important to remember that CWM doesn’t act as an event broker itself. It provides a means to connect to external brokers to forward messages and events.

Kafka broker
For the consume event type, CWM connects to a Kafka broker and listens for a specific event type on a topic. Once an event of the specific type registers to the right topic, CWM retrieves the event data and forwards it to the running workflow. The workflow then executes actions defined inside the Event State and/or runs another workflow execution (if selected).
For the produce event type, a running workflow produces a single event or a set of events which CWM then forwards to the broker and they get published in the right topic.

Note: The Kafka broker will accept every event message format supported by the language-specific SDK as long as a valid content-type is sent. See this GitHub link for lists of supported formats: https://github.com/cloudevents/spec?tab=readme-ov-file
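
For illustration, a produce-side payload could be assembled as structured-mode CloudEvents JSON before being handed to a Kafka client. The type, source, topic, and data values below are made-up examples, not CWM defaults:

```python
import json
import uuid

# Build a minimal CloudEvents message (structured JSON mode).
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),                  # unique per event
    "type": "com.example.router.temp.high",   # would match a CWM event type
    "source": "monitoring.app",
    "datacontenttype": "application/json",
    "data": {"RouterIP": "192.168.1.10"},
}
payload = json.dumps(event).encode("utf-8")

# Publishing is broker-specific and not shown here, e.g. with kafka-python:
#   producer.send("your-topic", payload)
print(json.loads(payload)["type"])  # com.example.router.temp.high
```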

AMQP protocol (such as the RabbitMQ broker)
For the consume event type, CWM connects to an AMQP broker and listens for a specific event type on a queue. Similarly to the Kafka broker, when an event of the specific type registers to the right queue, CWM retrieves the event data and forwards it to the running workflow. The workflow  then executes actions defined inside the Event State and/or runs another workflow execution (if selected).
For the produce event type, a running workflow produces a single event or a set of events which CWM then forwards to the broker and they get published in the right queue.
AMQP brokers will accept every event message format supported by the specific SDK as long as a valid content-type is sent. The lists of supported event formats are available here: https://github.com/cloudevents/spec?tab=readme-ov-file

HTTP protocol
For the consume event type, CWM exposes an HTTP endpoint that listens for any incoming events. If an event of a specific type comes, it is forwarded to the running workflow that waits for this event type.
When events are consumed, CWM functions as the destination HTTP server. Therefore, the URL of the CWM server is what you effectively provide as the resource for the given HTTP event type.

  • Event messages need to be HTTP POST requests, and the message body needs to be in JSON format representing a Cloud Event:

    {
      "specversion": "1.0",
      "id": "2763482-4-324-32-4",
      "type": "com.github.pull_request.opened",
      "source": "/sensors/tn-1234567/alerts",
      "datacontenttype": "text/xml",
      "data": "<test=\"xml\"/>",
      "contextAttrName": "contextAttrValue"
    }

  • For produce events, a workflow produces an event in the Cloud Event format and CWM forwards it as an HTTP POST request to an HTTP endpoint exposed by an external system. The HTTP endpoint address is a concatenation of the host URL defined in the Resource configuration in CWM and the End point field of the Event definition inside the workflow definition. Inside the resource configuration, you can change the request method to PUT or other, and add key and value pairs as headers (in JSON format).


Event system configuration
The following topics cover the details of event configuration.

Event system configuration: secrets
In event configuration, secrets store credentials needed to enable connection to a broker or endpoint exposed by a third-party service that sends or receives events. This includes basic authentication: username and password. The Secret ID that you provide when creating a secret will be referenced when creating a resource, so you need to add a secret beforehand. For details, see Step 1: Create a Kafka secret.

Event system configuration: resources
The resource is where you provide all the connection details (including the secret) needed to reach an event broker or endpoint exposed by a third-party service. Depending on the broker/protocol you want to use, you can choose among three default event resource types:

  • system.event.amqp.v1.0.0
  • system.event.kafka.v1.0.0
  • system.event.http.v1.0.0

Notice that there is a different set of configuration fields for each of them.

For AMQP, provide the ServerDSN in the following format: amqp://localhost:5723.

For Kafka:
  • KafkaVersion: Enter your Kafka version. The standard way to check the Kafka version is to run bin/kafka-topics.sh --version in a terminal.
  • Brokers: Enter your Kafka broker addresses in the following format: ["localhost:9092", "192.168.10.9:9092"].
  • OtherSettings: An editable list with default Kafka setting values. You can modify the values as needed. For details, see the "Kafka Other Settings" table below.

For HTTP:
  • Produce event types: Fill in the URL field and optionally, Method and Headers (for example, a Client ID header name and value as a JSON object).
    The URL needs to be the address of the destination HTTP server, but without the URL path. You will enter the URL path as the End point when configuring the event type.
  • Consume event types: Fill in the URL field with the server URL of your CWM instance, for example, 192.168.10.9:9092.
    Note
    Remember to provide the URL of your CWM instance without the URL path (/event/http). You will enter the URL path as the End point when configuring the event type.
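
Putting the produce-side fields together, a hypothetical HTTP resource configuration could look like this (the URL and header values are placeholders, not defaults):

```
URL: "https://203.0.113.5:8443"
Method: "POST"
Headers: { "Client-ID": "cwm-notifier" }
```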

Table 1: Kafka Other Settings

Field: Description
ClientID: The identifier used by Kafka brokers to track the source of requests
KafkaVersion: Specifies the version of Kafka the client is compatible with (e.g., "2.0.0")
MetadataFull: When True, fetches metadata for all topics, not just those needed
AdminRetryMax: Maximum number of retries for admin requests (e.g., creating/deleting topics)
NetSASLVersion: Version of the SASL (Simple Authentication and Security Layer) protocol
AdminTimeoutSecs: Timeout in seconds for admin requests (e.g., topic creation)
ConsumerFetchMin: Minimum amount of data in bytes the broker should return to the consumer
MetadataRetryMax: Maximum number of retries to fetch metadata (e.g., topic and partition info)
NetSASLHandshake: When True, enables the SASL handshake mechanism
NetDialTimeoutSecs: Timeout in seconds for establishing a connection to Kafka
NetReadTimeoutSecs: Timeout in seconds for reading data from Kafka
NetWriteTimeoutSecs: Timeout in seconds for writing data to Kafka
ProducerTimeoutSecs: Timeout in seconds for producing messages to Kafka
ConsumerFetchDefault: Default size in bytes for the consumer fetch request (e.g., 1 MB)
ProducerRequiredAcks: Specifies the required number of acknowledgments from brokers for a message to be considered successful (e.g., "WaitForLocal")
ProducerReturnErrors: When True, enables error reporting for failed produce requests
ConsumerIsolationLevel: Specifies whether the consumer reads uncommitted or committed messages ("ReadUncommitted" allows reading in-progress transactions)
ConsumerOffsetsInitial: Initial offset when there is no committed offset (-1 for the latest)
NetMaxOpenRequestsSecs: Maximum time for open requests over the network

Event types

To create a new event type, you need to have a resource and a secret added to CWM.
The following fields are available when adding an event type:

  • Event type name: the name of your event type. It's later referred to inside the workflow definition.
  • Resource: a list of resources previously added to CWM.
  • Event source: a fully user-defined entry that will be referenced in the workflow definition. Required for the produce event kind.
  • End point: the name of the Kafka topic (event stream), AMQP endpoint (terminus), or HTTP URL (host) path.
    Note: For the HTTP consume event type, provide /event/http as your End point.
  • Select kind: a list consisting of two options: the consume or produce event kind.
    Note: The both option is not yet supported in CWM.
  • Start listener (only for consume kind): check it to start listening for the defined event type.
  • Run job (only for consume kind): tick this checkbox if you want to trigger a workflow upon receiving the event, then select the desired workflow from the list.

Correlation attributes
Optionally, you can set context attributes for your event. They apply only to the consume event kind and are used to trigger workflows selectively. You can view them as custom filters that refine the inbound event data and route it to the right workflows that listen on event types with specific values of correlation attributes.
To add an attribute to your event type, click Add attribute, and provide an attribute name.
Correlation attributes are fully user-defined. They need to match the JSON key and value pair stated inside the Cloud event message that is to be routed to a given workflow.
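
To make the matching concrete, here is a toy sketch (not CWM internals) of how a correlation attribute value could select the workflow execution waiting for an event. All names and values are invented for illustration:

```python
# Toy routing table: (attribute name, value) -> waiting workflow execution.
waiting = {
    ("applicantId", "42"): "workflow-exec-A",
    ("applicantId", "77"): "workflow-exec-B",
}

def route(event: dict, attr: str):
    """Return the workflow execution whose correlation attribute matches."""
    return waiting.get((attr, str(event.get(attr))))

evt = {"type": "org.application.info", "applicantId": 42}
print(route(evt, "applicantId"))  # workflow-exec-A
```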

Event message format
Event messages must follow the Cloud Events specification format. A minimum viable event message following the specification contains the following parameters:

  {
    "specversion": "1.0",
    "id": "00001",
    "type": "com.github.pull_request.opened",
    "source": "/sensors/tn-1234567/alerts"
  }

The message can carry additional parameters, such as "datacontenttype", "data", and a correlation context attribute name ("contextAttrName" in this example):

  {
    "specversion": "1.0",
    "id": "2763482-4-324-32-4",
    "type": "com.github.pull_request.opened",
    "source": "/sensors/tn-1234567/alerts",
    "datacontenttype": "text/xml",
    "data": "<test data=\"xml\"/>",
    "contextAttrName": "contextAttrValue"
  }
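
A receiving side could check the minimum viable fields before queuing a message. This is an illustrative helper, not part of CWM:

```python
import json

# Required fields of a minimum viable CloudEvents message.
REQUIRED = ("specversion", "id", "type", "source")

def is_viable_event(raw: str) -> bool:
    """True when the JSON message carries all required CloudEvents fields."""
    try:
        msg = json.loads(raw)
    except ValueError:
        return False
    return isinstance(msg, dict) and all(msg.get(k) for k in REQUIRED)

ok = ('{"specversion": "1.0", "id": "00001", '
      '"type": "com.github.pull_request.opened", '
      '"source": "/sensors/tn-1234567/alerts"}')
print(is_viable_event(ok))                 # True
print(is_viable_event('{"id": "00001"}'))  # False
```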
Workflow event definition and state
In the workflow definition, there are two major syntactical elements that you use to handle the events for which the workflow will be waiting. These are:

  • The Event definition: Used to define the event type and its properties. For example:

    {
      "name": "applicant-info",
      "type": "org.application.info",
      "source": "applicationssource",
      "correlation": [
        {
          "contextAttrName": "applicantId"
        }
      ]
    }
  • The Event state: Used to define actions to be taken when the event occurs. For example:

    {
      "name": "MonitorVitals",
      "type": "event",
      "onEvents": [
        {
          "actions": [
            {
              "functionRef": {
                "refName": "uppercase",
                "arguments": {
                  "input": {
                    "in": "patient ${ .patient } has high temperature"
                  }
                }
              }
            }
          ],
          "eventRefs": [
            "HighBodyTemperature"
          ]
        }
      ]
    }
Define a Kafka event

In the following topics, we will create a Kafka event and add it to a new workflow. The only prerequisites are that we must have:
  • A fully set-up Kafka service.
  • CWM installed.

Step 1: Create a Kafka secret
To enable a secure connection to the Kafka service, you need to create a secret with Kafka credentials and a resource with connection details.

Procedure

Step 1 In CNC, select Administration > Workflow Administration > Secrets.
Step 2 Click Add Secret.
Step 3 In the New secret view, specify the following:
Step 4 After selecting the secret type, a set of additional fields is displayed under the Secret type details section. Fill in the fields:
Step 5 Click Create Secret.

Step 2: Create a Kafka resource
You also need to create a resource with connection details.

Step 1 In CNC, select Administration > Workflow Administration > Resources.
Step 2 Click Add Resource.

Step 3: Add the event type
When you have the secret and resource in place, it’s time to specify the type of event that will be consumed or produced.

Procedure

Step 1 In CNC, select Administration > Workflow Administration > Event types.
Step 2 Click Add event type.
Step 3 In the New event type window, provide the required input.

Step 4: Define the event in a workflow
Now that we have the event type added, we can create a workflow that registers for this event type and executes an action when the event is received by CWM. To do so, we’ll need to:

  1. Define the event using an Event definition.
  2. Specify the Event state.
  3. Define the actions to be taken when the event occurs.

As an example, let's take a scenario where a router overheating alarm (an inbound event) triggers a single workflow event state and defines two remediation actions to be executed in response to that state.

  {
    "id": "HighRouterTempWorkflow",
    "name": "Router Overheating Alarm Workflow",
    "start": "RemediateHighTemp",
    "events": [
      {
        "kind": "consumed",
        "name": "HighRouterTemp",
        "type": "HighRouterTemp",
        "source": "monitoring.app"
      }
    ],
    "states": [
      {
        "end": {
          "terminate": true
        },
        "name": "RemediateHighTemp",
        "type": "event",
        "onEvents": [
          {
            "actions": [
              {
                "functionRef": {
                  "refName": "DispatchTech",
                  "contextAttributes": {
                    "RouterIP": "${ .RouterIP }"
                  },
                  "resultEventTimeout": "PT30M"
                }
              }
            ],
            "eventRefs": [
              "HighRouterTemp"
            ]
          },
          {
            "actions": [
              {
                "functionRef": {
                  "refName": "MoveTraffic",
                  "contextAttributes": {
                    "RouterIP": "${ .RouterIP }"
                  },
                  "resultEventTimeout": "PT30M"
                }
              }
            ],
            "timeouts": {
              "actionExecTimeout": "PT60M"
            }
          }
        ]
      }
    ],
    "version": "1.0.0",
    "description": "Remediate router overheating",
    "specVersion": "0.8"
  }

Note
This example is not a complete workflow. It is an example of how to define an event inside a workflow, set a simple state, and then define actions to take in response to that single state. A realistic workflow can define many more states and actions to take in response to each of those states.
