From 462dc71338b4dce48bfcfea9b9eb0c4193d024a4 Mon Sep 17 00:00:00 2001 From: mrpa Date: Tue, 23 Nov 2021 18:45:40 +0100 Subject: Removed the obsolete appendix files from the example usecase manual, Updated the Automation UG, Updated the Makefile build components. USERDOCAP-733,USERDOCAP-740 Change-Id: I1c8d19a4cd496c10a982a9965793ffb19c12ab12 Signed-off-by: mrpa --- .../doc/automation_framework_test_harness.xml | 210 +++++++------ .../doc/appendix_1.xml | 63 ---- .../doc/appendix_2.xml | 326 --------------------- .../doc/appendix_3.xml | 7 - .../doc/appendix_4.xml | 104 ------- .../doc/appendix_5.xml | 244 --------------- doc/book-enea-edge-example-usecases/doc/book.xml | 4 - 7 files changed, 116 insertions(+), 842 deletions(-) delete mode 100644 doc/book-enea-edge-example-usecases/doc/appendix_1.xml delete mode 100644 doc/book-enea-edge-example-usecases/doc/appendix_2.xml delete mode 100644 doc/book-enea-edge-example-usecases/doc/appendix_3.xml delete mode 100644 doc/book-enea-edge-example-usecases/doc/appendix_4.xml delete mode 100644 doc/book-enea-edge-example-usecases/doc/appendix_5.xml diff --git a/doc/book-enea-edge-automation-user-guide/doc/automation_framework_test_harness.xml b/doc/book-enea-edge-automation-user-guide/doc/automation_framework_test_harness.xml index 5cc879d..4268582 100644 --- a/doc/book-enea-edge-automation-user-guide/doc/automation_framework_test_harness.xml +++ b/doc/book-enea-edge-automation-user-guide/doc/automation_framework_test_harness.xml @@ -240,14 +240,11 @@ Example: for a more extended output, change the output from selective to debug. - The test_harness directory contains all the - implemented playbooks. This directory is structured in multiple - subdirectories, each subdirectory represents a functionality of the Enea - Edge Management application. Each implemented playbook from this - directory runs a method from a Python class from the - automation_framework directory. 
Each playbook is an - atomic operation, a basic operation that need to be tested. These - playbooks are used in complex test scenarios. + The test_harness directory contains the custom + Ansible modules, located in the modules directory, and the playbooks + mapping functionalities from the Enea Edge Management application. The + playbooks are atomic operations that can be used in complex test + scenarios.
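As an illustration of how such atomic playbooks might be chained into a complex scenario, the sketch below builds and runs the corresponding ansible-playbook command lines. This is a hypothetical helper, not part of the Test Harness; the clean-up playbook name and all extra-vars values are invented for the example.

```python
import subprocess

def playbook_cmd(playbook, extra_vars=None):
    """Build the ansible-playbook command line for one atomic playbook."""
    cmd = ["ansible-playbook", playbook]
    if extra_vars:
        # Render the extra vars the same way the examples in this guide
        # pass them: -e "key1=value1 key2=value2"
        cmd += ["-e", " ".join("%s=%s" % kv for kv in sorted(extra_vars.items()))]
    return cmd

def run_scenario(steps):
    """Run (playbook, extra_vars) steps in order, stopping at the first failure."""
    for playbook, extra_vars in steps:
        subprocess.run(playbook_cmd(playbook, extra_vars), check=True)

# A hypothetical two-step scenario built from atomic playbooks.
# addDataPlaneOvsBridge.yml appears in the examples of this guide;
# removeDataPlaneOvsBridge.yml is an assumed clean-up counterpart.
scenario = [
    ("test_harness/uCPEDevice/addDataPlaneOvsBridge.yml",
     {"device": "inteld1521-17", "bridge_config_file": "lan_br.json"}),
    ("test_harness/uCPEDevice/removeDataPlaneOvsBridge.yml",
     {"device": "inteld1521-17", "bridge_name": "lan_br"}),
]
```

Calling run_scenario(scenario) would execute both playbooks in order and raise on the first non-zero exit code, mirroring the any_errors_fatal behaviour of the Test Harness configuration.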
@@ -1299,41 +1296,81 @@ Parameters(not needed):
 --

-      <Automation-installerdir>/test_harness directory.
+    The Ansible-based Test Harness represents an example of structuring
+    the files needed for creating automated test cases using the Enea Edge
+    Automation, and provides a way to implement them.
+
+    The ansible.cfg file contains an example of the
+    Ansible default configuration. The default value for
+    stdout_callback is set to selective,
+    to print only certain tasks. It is recommended to switch to
+    debug when a test fails. By setting the parameter
+    any_errors_fatal to True, task
+    failures are considered fatal errors and the play execution stops.
+
+    All the playbooks that execute Automation Framework Python modules
+    run on localhost. New entries have to be created for
+    direct communication over SSH with the boards.
+
+    The setup_env.sh script sets up the
+    testHarness test environment by creating the
+    testHarness-venv Python virtual environment, installing
+    the requirements needed by the Automation Framework Python modules, and
+    installing Ansible. The Ansible package version is 2.9.6.
+    Ansible modules
+
+    The custom Ansible modules can be found in the
+    test_harness/modules/ directory. The Ansible modules
+    are wrappers around the Python handlers from the
+    automation_framework/ directory. Each Ansible module
+    instantiates a Python handler and runs one of its methods with
+    specific parameters.
+
+    The Ansible modules are used in Ansible playbooks as:
+
+    <name_of_the_module>:
+      method: <which method will be called>
+      <param1>: <value_of_param1>
+      <param2>: <value_of_param2>
+      ...
+      <paramx>: <value_of_paramx>
+      chdir: <the path where the Ansible playbook will be run>
+
+    The method is a mandatory argument for all Ansible modules;
+    the other arguments are optional. The method must exist in the
+    <name_of_the_module> Ansible module. The parameters in the
+    list (param1, param2, ... paramx) must have the same names as the
+    parameters of the method; otherwise an error will occur.
+
+    For example, to add a uCPE Device to the Enea Edge
+    Management application from an Ansible playbook, the
+    uCPEDeviceModule should be used as:
+
+    uCPEDeviceModule:
+      method: addDevice
+      device_name: <name_of_the_device>
+      chdir: "{{ lookup('env','BASE_DIR') }}"
+
+    This calls the addDevice method from the
+    uCPEDevice class, which instantiates a
+    uCPEDeviceHandler object for the
+    device_name uCPE Device and then calls the handler's
+    addDevice method.
+
+    The list of possible parameters for all methods of an Ansible
+    module is saved in the module_args dictionary in
+    the Python script. The default value of all parameters is
+    None.
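The dispatch pattern described above can be sketched as follows. This is a simplified, self-contained stand-in for illustration only: the real modules wrap Ansible's AnsibleModule and the Automation Framework handlers, while the class and method names here merely follow the addDevice example in the text.

```python
class uCPEDeviceHandler:
    """Stand-in for the Automation Framework handler class."""

    def __init__(self, device_name):
        self.device_name = device_name

    def addDevice(self):
        # The real handler talks to the Enea Edge Management application.
        return "added %s" % self.device_name


def run_module_method(params):
    # All parameters default to None, mirroring the module_args
    # dictionary; 'method' is the only mandatory argument.
    module_args = dict(method=None, device_name=None, chdir=None)
    module_args.update(params)

    if module_args["method"] is None:
        raise ValueError("'method' is a mandatory argument")

    # Dispatch the requested method on a freshly instantiated handler,
    # failing if no such method exists in the module.
    handler = uCPEDeviceHandler(module_args["device_name"])
    method = getattr(handler, module_args["method"], None)
    if method is None:
        raise ValueError("method '%s' does not exist in the module"
                         % module_args["method"])
    return method()
```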
+
Individual Ansible Playbooks - The Ansible based Test Harness represents an example of - structuring the files needed for creating automated test cases using the - Enea Edge Automation, and provides a way to implement them. - - The ansible.cfg file contains an example of - the Ansible default configuration. The default value for - stdout_callback is set to - selective, to print only certain tasks. It is - recommended to switch to debug when a test fails. By - setting the parameter any_errors_fatal to - True, task failures are considered fatal errors and - the play execution stops. - - All the Playbooks that execute Automation Framework Python modules - run on localhost. New entries have to be created for - direct communication over SSH with the boards. - - The setup_env.sh script sets up the - testHarness test environment by creating the - testHarness-venv Python virtual environment, - executing requests needed by Automation Framework Python modules, and - installing Ansible. The Ansible package version is 2.9.6. - The test_harness directory contains all the - implemented Ansible Playbooks. This directory contains the - check_error.yml Playbook and many subdirectories, - each subdirectory representing an Enea Edge Management module. - - The check_errors.yml Playbook checks the - Python output and returns success or fail results. This file is imported - in all playbooks from the test_harness directory and - it cannot be run standalone. + implemented Ansible Playbooks and many subdirectories, each subdirectory + representing an Enea Edge Management module. 
According to their functionality, the Ansible Playbooks that refer to offline configuration are in the OfflineConfig @@ -1360,61 +1397,46 @@ Parameters(not needed): -- Example: - - - Display the help menu for - addDataPlaneOvsBridge.yml: - - ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml - -This playbook runs 'addDataPlaneOvsBridge' method from uCPEDeviceHandler module - - The Python module will be run as: - - Usage: uCPEDeviceHandler.py [OPTIONS] [PARAMETERS]... - - Options: - -v, --verbose Get info about the methods and parameters - -d, --device_name TEXT The uCPE Device name - -m, --method TEXT The atomic operation you want to run - -f, --config_file TEXT The config file for NIC or bridges (optional - argument) - - -o, --display_output Display output of the method - --help Show this message and exit. - -Usage: - ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e - "device=<device_name> bridge_config_file=<bridge_config_file>" - ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e - "device=<device_name> bridge_name=<bridge_name> bridge_type=integration" - ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e - "device=<device_name> bridge_name=<bridge_name> bridge_type=communication" - ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e - "device=<device_name> bridge_name=<bridge_name> bridge_type=communication - interfaces=<interfaces>" - - - - Run the addDataPlaneOvsBridge.yml - playbook: - - ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e -"device=inteld1521-17 bridge_config_file=sfc_br.json" -ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e -"device=inteld1521-17 bridge_config_file=lan_br.json" -ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e -"device=inteld1521-17 bridge_name=sfc_br bridge_type=integration" -ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e -"device=inteld1521-17 
bridge_name=wap_br bridge_type=communication" -ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e -"device=inteld1521-17 bridge_name=lan_br bridge_type=communication -interfaces=eno4" -ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml -e -"device=inteld1521-17 bridge_name=lan_br bridge_type=communication -interfaces=eno4,eno5" - - + Display the help menu for + addDataPlaneOvsBridge.yml: + + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml +. +# fail ************************************************************* + * localhost - FAILED!!! ------------------------- + The 'device' and 'bridge_config_file' or 'bridge_name' and \ +'bridge_type' parameters are mandatory + Usage: + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=<device_name> bridge_config_file=<bridge_config_file>" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=<device_name> bridge_name=<bridge_name> bridge_type=integration" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=<device_name> bridge_name=<bridge_name> bridge_type=communication" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=<device_name> bridge_name=<bridge_name> \ +bridge_type=communication interfaces=<interfaces>" + + Example of usage: + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=inteld1521-17 bridge_config_file=sfc_br.json" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=inteld1521-17 bridge_config_file=lan_br.json" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=inteld1521-17 bridge_name=sfc_br bridge_type=integration" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=inteld1521-17 bridge_name=wap_br bridge_type=communication" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e 
"device=inteld1521-17 bridge_name=lan_br \ +bridge_type=communication interfaces=eno4" + ansible-playbook test_harness/uCPEDevice/addDataPlaneOvsBridge.yml \ +-e "device=inteld1521-17 bridge_name=lan_br \ +bridge_type=communication interfaces=eno4,eno5" + + +# STATS ************************************************************ +localhost : ok=1 changed=0 failed=1 unreachable=0 \ +rescued=0 ignored=0 Each Ansible Playbook from the test_harness directory represents an atomic operation, a basic functionality that is diff --git a/doc/book-enea-edge-example-usecases/doc/appendix_1.xml b/doc/book-enea-edge-example-usecases/doc/appendix_1.xml deleted file mode 100644 index 8c76884..0000000 --- a/doc/book-enea-edge-example-usecases/doc/appendix_1.xml +++ /dev/null @@ -1,63 +0,0 @@ - - - How to create a 128T cloud-init iso image (day-0 - configuration) - - Prerequisites: - - Development host with Linux shell. - - - - genisoimage tool installed. - - - - Unpack the 128T/128t-cloud-init-example.tar.gz - archive and check the README file for more details: - - >tar -zxf 128t-cloud-init-example.tar.gz ->cd 128T/cloud-init-example/ ->ls ./ -README -user-data -meta-data -t128-running.xml - - To generate the cloud-init iso image: - - >genisoimage -output centos_128t_ci.iso -volid cidata -joliet \ --rock user-data meta-data t128-running.xml - - Notes: - - user-data and meta-data - files must be kept unchanged. - - - - To update the 128T configuration change the - t128-runing.xml file. - - - - XML is the same file downloaded from 128T web access: - configuration -> Import and Export Configuration -> - Export Configuration -> Download Configuration. The - configuration can be updated from a web interface, downloaded onto the - development host and used in generating a new cloud-init iso - image. - - - - By default, t128-running.xml is configured to pass - all traffic from the LAN to the WAN interface. 
There is only one change - required for the 128T VNF to work on the user's network: - - <rt:next-hop>172.24.15.254</rt:next-hop> - - Please change <172.24.15.254> with the IP address of your - Gateway in the t128-running.xml file and generate a new - iso image as described above. For more details about configuring the 128T - VNF please contact 128 Technologies. - \ No newline at end of file diff --git a/doc/book-enea-edge-example-usecases/doc/appendix_2.xml b/doc/book-enea-edge-example-usecases/doc/appendix_2.xml deleted file mode 100644 index 7ef7c41..0000000 --- a/doc/book-enea-edge-example-usecases/doc/appendix_2.xml +++ /dev/null @@ -1,326 +0,0 @@ - - - How to create the 128T image for NFV Access - - The following steps were used by Enea to generate the 128T qcow2 image - used as the VNF image on NFV Access. - - - Keep in mind a Virtual Machine was used instead of a physical - host. - - - Prerequisites: - - 128T-3.2.7-1.el7.centos.x86_64.iso provided - by 128 Technologies. - - - - A Linux development host with internet access. - - - - A least one of the TAP interfaces connected to a bridge with - Internet access. - - How to create the 128T image for NFV - Access: - - >qemu-img create -f qcow2 128t.qcow2 128G ->qemu-system-x86_64 -enable-kvm -m 8G -cpu host -smp cores=3,sockets=1 \ --M q35 -nographic bios /usr/share/qemu/bios.bin -boot order=d,menu=on \ -cdrom 128T-3.2.7-1.el7.centos.x86_64.iso \ -hdb 128t.qcow2 \ -device e1000,netdev=net1,mac=52:52:01:02:03:01 \ -netdev tap,id=net1,ifname=tap1,script=no,downscript=no - - - - Press the <ENTER> key to begin the installation - process. - - - - Wait for the distribution and the 128T to install: - - ------------------------------ -128T Packages Installed - -Please Remove Install Media, - -then enter <Yes> to reboot and -continue install process - - <Yes> <No> ------------------------------- - - Press Yes. 
- - - - Wait to reboot and press CTR+ a+c to enter - the qemu monitor: - - (qemu) quit - - - - Start qemu only with the qcow2 image attached, no installer - image required: - - >qemu-system-x86_64 -enable-kvm -m 8G -cpu host -smp cores=3,sockets=1 \ --M q35 -nographic bios /usr/share/qemu/bios.bin \ --boot order=c,menu=on \ --hda 128t.qcow2 \ --device e1000,netdev=net1,mac=52:52:01:02:03:01 \ --netdev tap,id=net1,ifname=tap1,script=no,downscript=no - ------------------------------------------------------------------------------- -Booting from Hard Disk... -. - - * CentOS Linux (3.10.0-514.2.2.el7.x86_64) 7 (Core) - CentOS Linux (0-rescue-4e73a369e89e466a888c9c77655a1d65) 7 (Core) - - - Use the ^ and v keys to change the selection. - Press 'e' to edit the selected item, or 'c' for a command prompt. ------------------------------------------------------------------------------- - - Select the first option. - - - - |-------------------128T Installer-------------------| -| | -| Configure Linux Networking | -| | -| Before 128T SetUp? | -| | -| | -| < Yes > < No > | -|----------------------------------------------------| - - Select NO. - - - - |----------------------------------------------------| -| Please select a role for this node: | -| |----------------------------------------------| | -| | (*) Router | | -| | ( ) Conductor | | -| |----------------------------------------------| | -| | -|----------------------------------------------------| -| < OK > < Back > | -|----------------------------------------------------|Select - Router and OK. - - - - |-------------------Conductor Info-------------------| -| | -| |----------------------------------------------| | -| |1st Conductor Address | | -| |Conductor Address | | -| |----------------------------------------------| | -| | -|----------------------------------------------------| -| < OK > < Skip > < Back > < Help > | -|----------------------------------------------------| - - Select SKIP. 
- - - - |----------------------HA Setup----------------------| -| What kind of Router node is this? | -| |----------------------------------------------| | -| |(*) Standalone No HA peer | | -| |( ) 1st HA Node HA peer is not set up | | -| |( ) 2nd HA Node HA peer is already set up | | -| |----------------------------------------------| | -| | -| | -|----------------------------------------------------| -| < OK > < Back > | -|----------------------------------------------------|Select - Standalone and OK. - - - - |---------------------Node Info----------------------| -| |----------------------------------------------| | -| | Node Role Router | | -| | Node Name 128tNode | | -| | Router Name 128tRouter | | -| |----------------------------------------------| | -| | -|----------------------------------------------------| -| < OK > < Advanced > < Back > < Help > | -|----------------------------------------------------| - - Enter a name for the router and node, press OK. - - - - |-------------------Password Setup-------------------| -| Enter the new password for the 128T 'admin' | -| user: | -| |----------------------------------------------| | -| | 128Tadmin | | -| |----------------------------------------------| | -| | | -|----------------------------------------------------| -| < OK > < Back > | -|----------------------------------------------------| - - Enter the password for web access: 128Tadmin - and confirm the password. - - - - |--------------------------Anonymous Data Collection--------------------------| -| The 128T Networking Platform comes packaged with a software process | -|("Roadrunner") that is used to proactively monitor the health and liveliness | -|of the 128T Router and associated components. This watchdog process collects | -|anonymous information from the router and sends it to 128 Technology for | -|storage and analysis. 
This information helps inform 128 Technology about | -|software usage, to aid in the support and improvement of the 128 Technology | -|Networking Platform. | -| | -|Disabling this feature will prevent the sending of anonymous usage data to | -|128 Technology. | -| | -| | -| < Accept > < Back > < Disable > | -|-----------------------------------------------------------------------------| - - Select Accept. - - - - |-----128T Statistics Table Creator-----| -| Created table for metric 760/827 | -| Created table for metric 770/827 | -| Created table for metric 780/827 | -| Created table for metric 790/827 | -| Created table for metric 800/827 | -| Created table for metric 810/827 | -| Created table for metric 820/827 | -| Finished pre-creating stats tables | -| Creating tables for audit events | -| Finished creating audit event tables | -| Completed in 27.001386642456055 s | -| Shutting down local Cassandra node | -|---------------------------------------| -| < OK > | -|---------------------------------------| - - Select OK. - - - - |--------128T Installer Status----------| -| | -| Install SUCCESS | -| | -| Start 128T Router | -| before proceeding to | -| login prompt? | -|---------------------------------------| -| < Yes > < No > | -|---------------------------------------| - - Select: Yes - - - - localhost login: root -Password: - - The following user accounts and passwords are created during the - ISO installation process: - - - Accounts Created - - - - - - - User - - Password - - - - - - root - - 128tRoutes - - - - t128 - - 128tRoutes - - - -
-
- - - GUI login via HTTPS is enabled by default on port 443 - - [root@localhost ~]# dhclient enp0s2 -[root@localhost ~]# echo "nameserver 8.8.8.8" >>/etc/resolv.conf -[root@localhost ~]# yum -y install cloud-init -[root@localhost ~]# reboot - - - - Wait to reboot and press CTR+ a+c to enter in qemu - monitor. - - (qemu) quit -> qemu-img info 128t.qcow2 -image: 128t.qcow2 -file format: qcow2 -virtual size: 128G (137438953472 bytes) -disk size: 5.4G -cluster_size: 65536 -Format specific information: - compat: 1.1 - lazy refcounts: false - refcount bits: 16 - corrupt: false - - - - Compress the generated 128t.qcow2 image to - decrease the size of VNF image: - - qemu-img convert -O qcow2 -c 128t.qcow2 centos_128t_compressed.qcow2 - -> qemu-img info centos_128t_compressed.qcow2 -image: centos_128t_compressed.qcow2 -file format: qcow2 -virtual size: 128G (137438953472 bytes) -disk size: 1.2G -cluster_size: 65536 -Format specific information: - compat: 1.1 - lazy refcounts: false - refcount bits: 16 - corrupt: false - -centos_128t_compressed.qcow2 - Resulted image can be used in NFV Access. - -
-
\ No newline at end of file diff --git a/doc/book-enea-edge-example-usecases/doc/appendix_3.xml b/doc/book-enea-edge-example-usecases/doc/appendix_3.xml deleted file mode 100644 index 063483a..0000000 --- a/doc/book-enea-edge-example-usecases/doc/appendix_3.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - How to configure Fortigate VNF (day-0 configuration) - - Please check the README file from Fortigate folder for more - details. - \ No newline at end of file diff --git a/doc/book-enea-edge-example-usecases/doc/appendix_4.xml b/doc/book-enea-edge-example-usecases/doc/appendix_4.xml deleted file mode 100644 index f554b37..0000000 --- a/doc/book-enea-edge-example-usecases/doc/appendix_4.xml +++ /dev/null @@ -1,104 +0,0 @@ - - - Running Enea Edge Automation Framework and Test Harness - - For more detailed information regarding the Enea Edge Automation - Framework and Test Harness please see the . - - The most relevant information from the Enea Edge Automation Framework - and Test Harness structure is presented below: - - |---automation_framework -| |---unittestSuite -| | |---128tCleanup.json - Use case 1 - clean up - test. -| | |---128tDeploy.json - Use case 1 - test. -| | |---128t_FG_SFCCleanup.json - Use case 2 - clean up - test. -| | |---128t_FG_SFCDeploy.json - Use case 2 - test. -| | |---config -| | | |---cust - - Folder containing the configuration files used by tests. -| | |---unittestLoader.py -| | |---unittestSuite.py -|---lab_config -| |---trgt-1 -| | |---ibm_br.json - In-band management definition. -| | |---lan_br.json - Lan bridge definition. -| | |---target.json - - Target definition - the "address", "deviceId", "name" and \ - "version" must be updated. -| | |---sfc_br.json - Service chain bridge definition. -| | |---vnf_mgmt_br.json - VNF management bridge definition. -| | |---lan_nic.json - NIC definition. -|---vnf_config -| |---128t -| | |---128tInstance.json - 128T instantiation - used in use case 1. -| | |---128t.json - 128T onboarding. 
-| | |---128tSFCInstance.json - 128T instantiation - used in use case 2. -| | |---centos_128t_internet_ci.iso - 128T cloud init (day-0) iso image. -| |---fortigate -| | |---fg_basic_fw.conf - Fortigate day-0 configuration. -| | |---fortigateInstance.json - Fortigate instantiation. -| | |---fortigate.json - Fortigate onboarding. -| | |---fortigateLicense.lic - - Fortigate license - contact Fortinet to get a VNF image and license file. -|---vnf_image -| |---centos_128t_with_ci.qcow2 - Contact 128 Technology to get a \ - VNF image and its license file. -| |---fortios.qcow2 - Contact Fortinet to get a VNF image \ - and its license file. - - Make sure to update the relevant configuration file for your setup. - The essential files to consider are the uCPE Device configuration - (target.json), the license for the Fortigate VNF, and - the 128T cloud-init iso image matching your network. - - For uCPE Device configuration (target.json) - please change the following information, if needed, in the JSON file: - - - - address - The IP address of uCPE Device. - - - - version - The Enea Edge Runtime version. - - - - deviceId - The device ID of uCPE Device. - - - - name - The name of uCPE Device. - - - - - Before starting the two use-cases detailed in the following appendix, - the uCPE Device needs to be added into the Enea Edge Management - application. - - - To properly set up the Enea Edge Automation Framework and Test Harness - please see Installation and Initial Setup in the for - more details. - - To run a test: - - > cd automation_framework/unittestSuite/ -> python unittestSuite.py -u admin -p admin -H <EneaEdgeManagement IP address> -n \ -<uCPE Device name> -s <Test suite> -d <description> - - The Test suite must be one from any of the - following: 128tDeploy.json, - 128tCleanup.json, - 128t_FG_SFCDeploy.json, or - 128t_FG_SFCCleanup.json. 
- \ No newline at end of file diff --git a/doc/book-enea-edge-example-usecases/doc/appendix_5.xml b/doc/book-enea-edge-example-usecases/doc/appendix_5.xml deleted file mode 100644 index ac09bda..0000000 --- a/doc/book-enea-edge-example-usecases/doc/appendix_5.xml +++ /dev/null @@ -1,244 +0,0 @@ - - - Example Tests Results using the Automation Framework and Test - Harness - - In order to run the following example use-cases, certain configuration - file entries need to be modified according to the network setup that it will - be used, for more details see the previous appendix: - - - - uCPE Device name: inteld1521-17 - - - - address: 172.24.8.62 - - - - version: 2.2.3 - - - - deviceId: inteld1521-17 - - - - > cat lab_config/trgt-1/target.json -{ - "name": "inteld1521-17", - "deviceGroupingTags": " ", - "description": "trgt", - "address": "172.24.8.62", - "port": "830", - "username": "root", - "password": "", - "certificate": null, - "passphrase": null, - "callHome": "false", - "maintMode": "false", - "version": "2.2.3", - "deviceId": "inteld1521-17" -}The IP address of Enea Edge Management application that will - be used in these examples is 172.24.3.92. - - The FortiGate and 128T VNF images need to be copied into the - vnf_image directory. The names should be the same as - those described in the previous appendix. - - The FortiGate valid license file needs to be copied into the - vnf_config/fortigate/ directory. The name should be the - same as that described in the previous appendix. - - The cloud init files that match the network, need to be copied into - the vnf_config/fortigate/ and the - vnf_config/128t/ directories respectively. The names - should be the same as those described in the previous appendix. - -
- Use-case 1: 128T VNF Router Example Use-case - - > cd automation_framework/unittestSuite/ -> python unittestSuite.py -u admin -p admin -H 172.24.3.92 -n inteld1521-17 \ --s 128tDeploy.json -d "128T Deployment" - -Running 128T Deployment... - -test 001: Wait VCPE Agent device be up (__main__.UnittestSuite) ... -2020-08-26 10:10:05,517 - INFO: Wait uCPE device -2020-08-26 10:10:36,650 - INFO: Status: Connected -2020-08-26 10:10:36,651 - INFO: Done -ok -test 002: Bind NIC to DPDK for LAN connection (__main__.UnittestSuite) ... -2020-08-26 10:10:36,686 - INFO: Bind NIC -2020-08-26 10:10:37,788 - INFO: Done -ok -test 003: Creating ibm bridge (__main__.UnittestSuite) ... -2020-08-26 10:10:37,818 - INFO: New OVS network bridge -2020-08-26 10:10:58,762 - INFO: Done -ok -test 004: Creating VNF Management bridge (__main__.UnittestSuite) ... -2020-08-26 10:10:58,794 - INFO: New OVS network bridge -2020-08-26 10:10:58,977 - INFO: Done -ok -test 005: Creating LAN bridge and attaching lan interface to the bridge \ -(__main__.UnittestSuite) ... -2020-08-26 10:10:59,003 - INFO: New OVS network bridge -2020-08-26 10:10:59,334 - INFO: Done -ok -test 006: Onboarding 128T VNF (wizard API) (__main__.UnittestSuite) ... -2020-08-26 10:10:59,370 - INFO: Onboard wizard -2020-08-26 10:13:55,775 - INFO: Done -ok -test 007: Instantiate 128T VNF (__main__.UnittestSuite) ... -2020-08-26 10:13:55,813 - INFO: Instantiate VNF -2020-08-26 10:14:56,583 - INFO: Done -ok - ----------------------------------------------------------------------- -Ran 7 tests in 291.103s - -OK - -> python unittestSuite.py -u admin -p admin -H 172.24.3.92 -n inteld1521-17 \ --s 128tCleanup.json -d "128T Cleanup" - -Running 128T Cleanup... - -test 001: Destroying 128T VNF (__main__.UnittestSuite) ... -2020-08-26 10:15:28,395 - INFO: Destroy VNF -2020-08-26 10:15:29,452 - INFO: Done -ok -test 002: Deleting network bridge LAN (__main__.UnittestSuite) ... 
-2020-08-26 10:15:29,493 - INFO: Delete OVS network bridge -2020-08-26 10:15:29,734 - INFO: Done -ok -test 003: Deleting VNF management bridge (__main__.UnittestSuite) ... -2020-08-26 10:15:29,765 - INFO: Delete OVS network bridge -2020-08-26 10:15:30,080 - INFO: Done -ok -test 004: Deleting ibm(In Band Management) bridge (__main__.UnittestSuite) ... -2020-08-26 10:15:30,110 - INFO: Delete OVS network bridge -2020-08-26 10:15:46,907 - INFO: Done -ok -test 005: Unbind LAN NIC from DPDK target (__main__.UnittestSuite) ... -2020-08-26 10:15:46,967 - INFO: Unbind NIC -2020-08-26 10:15:48,489 - INFO: Done -ok -test 006: Offboarding 128t VNF (__main__.UnittestSuite) ... -2020-08-26 10:15:48,531 - INFO: Offboard VNF -2020-08-26 10:15:49,171 - INFO: Done -ok - ----------------------------------------------------------------------- -Ran 6 tests in 20.808s - -OK -
- -
- Use-case 2: Service Chaining 128T - Fortigate Example - Use-case - - > python unittestSuite.py -u admin -p admin -H 172.24.3.92 -n inteld1521-17 \ --s 128t_FG_SFCDeploy.json -d "128T - Fortigate SFC Deployment" - -Running 128T - Fortigate SFC Deployment... - -test 001: Wait VCPE Agent device be up (__main__.UnittestSuite) ... -2020-08-26 10:17:29,361 - INFO: Wait uCPE device -2020-08-26 10:18:00,473 - INFO: Status: Connected -2020-08-26 10:18:00,474 - INFO: Done -ok -test 002: Bind NIC to DPDK for LAN connection (__main__.UnittestSuite) ... -2020-08-26 10:18:00,634 - INFO: Bind NIC -2020-08-26 10:18:01,805 - INFO: Done -ok -test 003: Creating ibm bridge (__main__.UnittestSuite) ... -2020-08-26 10:18:01,863 - INFO: New OVS network bridge -2020-08-26 10:18:30,640 - INFO: Done -ok -test 004: Creating VNF Management bridge (__main__.UnittestSuite) ... -2020-08-26 10:18:30,670 - INFO: New OVS network bridge -2020-08-26 10:18:30,876 - INFO: Done -ok -test 005: Creating LAN bridge and attaching lan interface to the bridge \ -(__main__.UnittestSuite) ... -2020-08-26 10:18:30,908 - INFO: New OVS network bridge -2020-08-26 10:18:31,243 - INFO: Done -ok -test 006: Creating SFC(service function chaining) bridge (__main__.UnittestSuite) ... -2020-08-26 10:18:31,273 - INFO: New OVS network bridge -2020-08-26 10:18:31,416 - INFO: Done -ok -test 007: Onboarding 128T VNF (wizard API) (__main__.UnittestSuite) ... -2020-08-26 10:18:31,448 - INFO: Onboard wizard -2020-08-26 10:21:21,569 - INFO: Done -ok -test 008: Onboarding Fortigate VNF (wizard API) (__main__.UnittestSuite) ... -2020-08-26 10:21:21,608 - INFO: Onboard wizard -2020-08-26 10:21:27,199 - INFO: Done -ok -test 009: Instantiate 128T VNF (__main__.UnittestSuite) ... -2020-08-26 10:21:27,226 - INFO: Instantiate VNF -2020-08-26 10:22:27,067 - INFO: Done -ok -test 010: Instantiate Fortigate VNF (__main__.UnittestSuite) ... 
-2020-08-26 10:22:27,121 - INFO: Instantiate VNF -2020-08-26 10:22:31,310 - INFO: Done -ok - ----------------------------------------------------------------------- -Ran 10 tests in 301.989s - -OK - -> python unittestSuite.py -u admin -p admin -H 172.24.3.92 -n inteld1521-17 \ --s 128t_FG_SFCCleanup.json -d "128T - Fortigate SFC Cleanup" - -Running 128T - Fortigate SFC Cleanup... - -test 001: Destroying Fortigate VNF (__main__.UnittestSuite) ... -2020-08-26 10:23:29,308 - INFO: Destroy VNF -2020-08-26 10:23:30,026 - INFO: Done -ok -test 002: Destroying 128T VNF (__main__.UnittestSuite) ... -2020-08-26 10:23:30,065 - INFO: Destroy VNF -2020-08-26 10:23:30,917 - INFO: Done -ok -test 003: Deleting network bridge SFC (__main__.UnittestSuite) ... -2020-08-26 10:23:30,960 - INFO: Delete OVS network bridge -2020-08-26 10:23:31,123 - INFO: Done -ok -test 004: Deleting network bridge LAN (__main__.UnittestSuite) ... -2020-08-26 10:23:31,156 - INFO: Delete OVS network bridge -2020-08-26 10:23:31,381 - INFO: Done -ok -test 005: Deleting VNF management bridge (__main__.UnittestSuite) ... -2020-08-26 10:23:31,412 - INFO: Delete OVS network bridge -2020-08-26 10:23:31,596 - INFO: Done -ok -test 006: Deleting ibm(In Band Management) bridge (__main__.UnittestSuite) ... -2020-08-26 10:23:31,621 - INFO: Delete OVS network bridge -2020-08-26 10:23:47,980 - INFO: Done -ok -test 007: Unbind LAN NIC from DPDK target (__main__.UnittestSuite) ... -2020-08-26 10:23:48,019 - INFO: Unbind NIC -2020-08-26 10:23:49,547 - INFO: Done -ok -test 008: Offboarding 128t VNF (__main__.UnittestSuite) ... -2020-08-26 10:23:49,575 - INFO: Offboard VNF -2020-08-26 10:23:50,252 - INFO: Done -ok -test 009: Offboarding Fortigate VNF (__main__.UnittestSuite) ... -2020-08-26 10:23:50,295 - INFO: Offboard VNF -2020-08-26 10:23:50,589 - INFO: Done -ok - ----------------------------------------------------------------------- -Ran 9 tests in 21.326s - -OK -
-
\ No newline at end of file diff --git a/doc/book-enea-edge-example-usecases/doc/book.xml b/doc/book-enea-edge-example-usecases/doc/book.xml index e872c95..5e7a3f4 100644 --- a/doc/book-enea-edge-example-usecases/doc/book.xml +++ b/doc/book-enea-edge-example-usecases/doc/book.xml @@ -37,8 +37,4 @@ - - - - \ No newline at end of file -- cgit v1.2.3-54-g00ecf