From 02e257b9f83f4919f21c898f7d15ef20bdfd19a7 Mon Sep 17 00:00:00 2001
From: Miruna Paun
Date: Mon, 2 Oct 2017 19:27:25 +0200
Subject: Added another round of edits from Cip USERDOCAP-240

Signed-off-by: Miruna Paun
---
 book-enea-nfv-core-installation-guide/doc/book.xml |   2 +-
 .../doc/high_availability.xml                      |  28 ++--
 .../doc/images/dns.png                             | Bin 11639 -> 10997 bytes
 .../doc/images/dns.svg                             |   2 +-
 .../doc/installation_instructions.xml              | 169 ++++++++-------------
 .../doc/post_deploy_config.xml                     | 119 ---------------
 6 files changed, 79 insertions(+), 241 deletions(-)
 delete mode 100644 book-enea-nfv-core-installation-guide/doc/post_deploy_config.xml

diff --git a/book-enea-nfv-core-installation-guide/doc/book.xml b/book-enea-nfv-core-installation-guide/doc/book.xml
index 62d46f6..fcba092 100644
--- a/book-enea-nfv-core-installation-guide/doc/book.xml
+++ b/book-enea-nfv-core-installation-guide/doc/book.xml
@@ -10,7 +10,7 @@
-
+
diff --git a/book-enea-nfv-core-installation-guide/doc/high_availability.xml b/book-enea-nfv-core-installation-guide/doc/high_availability.xml
index 4fe02fe..6d1a9c7 100644
--- a/book-enea-nfv-core-installation-guide/doc/high_availability.xml
+++ b/book-enea-nfv-core-installation-guide/doc/high_availability.xml
@@ -305,7 +305,7 @@
 The Zabbix configuration dashboard is available at the same IP
 address where OpenStack can be reached, e.g.
- http://<vip__zbx_vip_mgmt>/zabbix.
+ http://10.0.6.42/zabbix.

 To forward Zabbix events to Vitrage, a new media script needs to
 be created and associated with a user. Follow the steps below as a

@@ -550,8 +550,8 @@ root@node-6:~# systemctl restart vitrage-graph

 Pacemaker High Availability

 Many of the OpenStack solutions which offer High Availability
- characteristics employ pacemaker for achieving highly available OpenStack
- services. Traditionally pacemaker has been used for managing only the
+ characteristics employ Pacemaker for achieving highly available OpenStack
+ services. Traditionally Pacemaker has been used for managing only the
 control plane services, so it can effectively provide redundancy and
 recovery for the Controller nodes only. A reason for this is that
 Controller nodes and Compute nodes essentially have very different High

@@ -572,9 +572,9 @@
 understood and experimented with, and the basis for this is Pacemaker
 using Corosync underneath.

- Extending the use of pacemaker to Compute nodes was thought as a
+ Extending the use of Pacemaker to Compute nodes was considered a
 possible solution for providing VNF high availability, but the problem
- turned out to be more complicated. On one hand, pacemaker as a clustering
+ turned out to be more complicated. On one hand, Pacemaker, as a clustering
 tool, can only scale properly up to a limited number of nodes, usually
 fewer than 128. This poses a problem for large scale deployments where
 hundreds of compute nodes are required. On the other hand, Compute node
@@ -584,20 +584,20 @@
 Pacemaker Remote

- As mentioned earlier, pacemaker and corosync do not scale well
+ As mentioned earlier, Pacemaker and Corosync do not scale well
 over a large cluster, since each node has to talk to every other,
 essentially creating a mesh configuration. A solution to this problem
 could be partitioning the cluster into smaller groups, but this has its
 limitations and it is generally difficult to manage.

 A better solution is using pacemaker-remote, a
- feature of pacemaker, which allows for extending the cluster beyond the
- usual limits by using the pacemaker monitoring capabilities. It
+ feature of Pacemaker, which allows for extending the cluster beyond the
+ usual limits by using the Pacemaker monitoring capabilities. It
 essentially creates a new type of resource which enables adding
 lightweight nodes to the cluster. More information about
 pacemaker-remote can be found on the official clusterlabs
 website.

- Please note that at this moment pacemaker remote must be
+ Please note that at the moment Pacemaker Remote must be
 configured manually after deployment. Here are the manual steps for
 doing so:

@@ -629,7 +629,7 @@
controller, vitrage | | 1 | 1

- Each controller has a unique pacemaker authkey. One needs to
+ Each controller has a unique Pacemaker authkey. One needs to
 be kept and propagated to the other servers. Assuming node-1,
 node-2 and node-3 are the controllers, execute the following from
 the Fuel console:

@@ -711,7 +711,7 @@ RemoteOnline: [ node-4.domain.tld node-5.domain.tld ]

 Pacemaker Fencing

 ENEA NFV Core 1.0 makes use of the fencing capabilities of
- pacemaker to isolate faulty nodes and trigger recovery actions by means
+ Pacemaker to isolate faulty nodes and trigger recovery actions by means
 of power cycling the failed nodes.

 Fencing is configured by creating STONITH type resources for
 each of the servers in the cluster, both Controller nodes and Compute nodes. The

controller, vitrage | | 1 | 1

- Configure pacemaker fencing resources. This needs to be done
+ Configure Pacemaker fencing resources. This needs to be done
 once on one of the controllers. The parameters will vary, depending
 on the BMC addresses of each node and credentials. A complete command
 of this form is sketched at the end of this section.

@@ -779,7 +779,7 @@ ipaddr=10.0.100.155 login=ADMIN passwd=ADMIN op monitor interval="60s"

 Activate fencing by enabling the stonith-enabled
- property in pacemaker (disabled by default). This also needs to be
+ property in Pacemaker (disabled by default). This also needs to be
 done only once, on one of the controllers.

 [root@node-1:~]# pcs property set stonith-enabled=true

@@ -805,7 +805,7 @@ ipaddr=10.0.100.155 login=ADMIN passwd=ADMIN op monitor interval="60s"

 The work for Compute node High Availability is captured in an
 OpenStack user story and documented upstream, showing proposed solutions,
 summit talks and presentations. A number of these solutions make use of
- OpenStack Resource Agents, which are a set of specialized pacemaker
+ OpenStack Resource Agents, which are a set of specialized Pacemaker
 resources capable of identifying failures in compute nodes and can
 perform automatic evacuation of the instances affected by these failures.
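For illustration, a complete fencing-resource creation command of the kind
referenced in the hunk context above (the ipaddr=... login=... fragment) could
look as follows. This is a minimal sketch assuming IPMI-capable BMCs and the
fence_ipmilan agent; the resource name, host list, BMC address and credentials
are placeholders that must be adapted for each node:

[root@node-1:~]# pcs stonith create ipmi-fence-node4 fence_ipmilan \
pcmk_host_list="node-4.domain.tld" ipaddr=10.0.100.155 login=ADMIN \
passwd=ADMIN op monitor interval="60s"

Once a resource has been created for every Controller and Compute node,
pcs stonith show can be used to verify that all fencing resources are in
place.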
diff --git a/book-enea-nfv-core-installation-guide/doc/images/dns.png b/book-enea-nfv-core-installation-guide/doc/images/dns.png
index f6467bb..b3b08ba 100644
Binary files a/book-enea-nfv-core-installation-guide/doc/images/dns.png and b/book-enea-nfv-core-installation-guide/doc/images/dns.png differ
diff --git a/book-enea-nfv-core-installation-guide/doc/images/dns.svg b/book-enea-nfv-core-installation-guide/doc/images/dns.svg
index 1da972e..a7d3459 100644
--- a/book-enea-nfv-core-installation-guide/doc/images/dns.svg
+++ b/book-enea-nfv-core-installation-guide/doc/images/dns.svg
@@ -1,3 +1,3 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/book-enea-nfv-core-installation-guide/doc/installation_instructions.xml b/book-enea-nfv-core-installation-guide/doc/installation_instructions.xml
index 2b89490..fe93e1a 100644
--- a/book-enea-nfv-core-installation-guide/doc/installation_instructions.xml
+++ b/book-enea-nfv-core-installation-guide/doc/installation_instructions.xml
@@ -19,8 +19,9 @@
 Armband project is out of the scope of this document but information is
 available online on the OPNFV wiki.

- The OPNFV download page provides general instructions for building and
- installing the Fuel Installer .iso and also on how to deploy OPNFV Danube
+ The OPNFV
+ download page provides general instructions for building and
+ installing the Fuel Installer ISO, as well as for deploying OPNFV Danube
 using Fuel on a Pharos compliant test lab.

@@ -43,8 +44,8 @@
Other Preparations

- Reading the following addition and optional documents aides in
- familiarizing yourself with Fuel:
+ Reading the following documents aids in familiarizing yourself with
+ Fuel:

@@ -62,13 +63,13 @@

 Fuel
- Developer Guide
+ Developer Guide (optional)

 Fuel
- Plugin Developers Guide
+ Plugin Developers Guide (optional)

@@ -101,7 +102,8 @@

- Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
+ Network overlay planned for deployment (VLAN, VXLAN, FLAT). Only
+ VLAN is supported in this release.

@@ -306,7 +308,7 @@
-
+
@@ -331,23 +333,14 @@

- Enable Experimental features:
-
- In the Feature groups section, enable the
- checkbox for Experimental features.
-
- Move to the <Apply> button and press
- <Enter>
-
+ In the Feature groups section, enable the
+ checkbox for Experimental features. Move to the
+ <Apply> button and press <Enter>
-
+
@@ -385,30 +378,27 @@

 Enable PXE booting:

 For every controller and compute server, enable PXE Booting as
- the first boot device in the UEFI (EDK2) boot order menu, with the
- hard disk as the second boot device in the same menu.
+ the first boot device in the BIOS boot menu (for x86) or UEFI boot
+ order menu (for aarch64).

 Reboot all the control and compute blades.
+
+ Connect to the FUEL UI via the URL provided in the Console
+ (default: https://10.20.0.2:8443)
+
+ Wait for the nodes to become available in the Fuel GUI.
+
-
- Connect to the FUEL UI via the URL provided in the Console
- (default: https://10.20.0.2:8443)
-
- Wait until all nodes are displayed in top right corner of
- the Fuel GUI: Total nodes and Unallocated nodes (see figure
- below).
-
+
+ Wait until all nodes are displayed in the top right corner of the
+ Fuel GUI: Total nodes and Unallocated nodes (see figure below).

@@ -441,6 +431,10 @@

 Tacker VNF Manager
+
+ KVM For NFV Plugin
+

 Login to the Fuel Master via ssh using the

@@ -449,7 +443,8 @@

 $ fuel plugins --install /opt/opnfv/vitrage-1.0-1.0.4-1.noarch.rpm
 $ fuel plugins --install zabbix_monitoring-2.5-2.5.3-1.noarch.rpm
-$ fuel plugins --install tacker-1.0-1.0.0-1.noarch.rpm
+$ fuel plugins --install tacker-1.0-1.0.0-1.noarch.rpm
+$ fuel plugins --install /opt/opnfv/fuel-plugin-kvm-1.0-1.0.0-1.noarch.rpm

 Expected output: Plugin ....... was successfully installed.
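To confirm that the plugins were registered, the list of installed plugins
can be printed from the same Fuel Master console. The listing below is an
illustrative sketch only: the id and package_version values depend on the
environment, while the name and version columns should match the RPMs
installed above:

$ fuel plugins
id | name              | version | package_version
---+-------------------+---------+----------------
1  | vitrage           | 1.0.4   | 4.0.0
2  | zabbix_monitoring | 2.5.3   | 4.0.0
3  | tacker            | 1.0.0   | 4.0.0
4  | fuel-plugin-kvm   | 1.0.0   | 4.0.0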
@@ -472,8 +467,10 @@

- Select ”aarch64 or x86_64” and press
- <Next>
+ Only Debian 9 is supported in this release. Select
+ Newton on Debian 9 (x86_64) or Newton
+ on Debian 9 (aarch64) depending on your
+ configuration:

@@ -489,23 +486,10 @@

- Select network mode:
-
- Select Neutron with ML2 plugin
-
- Select Neutron with VLAN segmentation
- (needed when enabling DPDK). VXLAN is available but not
- supported.
-
- Press [Next]
+ Select Neutron with VLAN segmentation.
+ Neutron with tunneling segmentation is available
+ but not supported in this release. DPDK scenarios only work with VLAN
+ segmentation.

@@ -516,7 +500,12 @@

 Select Storage Back-ends, then Ceph
- for block storage and press [Next]
+ for block storage.
+
+ Ceph for Image Storage,
+ Object storage and Ephemeral
+ storage have not been validated for this release. It is
+ advisable to use only the option mentioned above.
Configure the Network Environment - To configure the network environment specifically to a DPDK based - scenario, please follow these steps: + To configure the network environment, please follow these + steps: @@ -654,34 +643,6 @@ $ fuel plugins --install tacker-1.0-1.0.0-1.noarch.rpm - - Update the Private Network information: - - - - It is recommended to keep the default CIDR - - - - Set IP Range Start to an appropriate value (default - 192.168.2.1) - - - - Set IP Range End to an appropriate value (default - 192.168.2.254) - - - - Check <VLAN tagging> - - - - Set an appropriate VLAN tag (default 103) - - - - Select the Neutron L3 Node Networks group on the left pane: @@ -780,24 +741,19 @@ $ fuel plugins --install tacker-1.0-1.0.0-1.noarch.rpm
Adding/Removing Repositories

- Fuel by default, uses a set of repositories as package sources, that
- hold both OpenStack components as well as other needed packages. In order
- to speed up the deployment process, Fuel will create two mirrors. The
- first, a local mirror, reachable on the Admin interface (e.g.
- 10.20.0.2:8080/newton-10.0/ubuntu/x86-64), will add
- additional repositories that need external connections. The second, a
- debian testing main:
- http://10.20.0.2:8080/mirrors/debian, requires no other
- repositories to be added that need external connections, having only (even
- for offline): debian-testing-local and
- mos.
-
- It is possible to avoid using external repositories and make the
- entire process completely offline. In this way only the most basic
- packages will be installed, but the process will be more efficient and not
- depend on an Internet connection. To do this, just make sure that the
- Repositories list contains only ubuntu-local,
- mos and Auxilliary.
+ Enea NFV Core has been validated for complete offline deployment. To
+ this end, two repositories are defined and used. The first,
+ debian-testing-local (deb
+ http://10.20.0.2:8080/mirrors/debian testing main), contains a
+ snapshot of the Debian base OS, while the second, mos
+ (deb http://10.20.0.2:8080/newton-10.0/ubuntu/x86_64 mos10.0 main
+ restricted), stores the Enea NFV Core-specific OpenStack and
+ OpenStack-related packages.
+
+ These repositories provide only the minimum necessary packages, but
+ it is possible to add extra repositories as needed. It is recommended,
+ however, that the first deployment be performed without extra
+ repositories.

@@ -827,7 +783,8 @@

 In the FUEL UI of your Environment, click the
 Settings tab and select OpenStack Services on the left side pane;
 make sure Tacker is NOT enabled
- and save your settings:
+ and save your settings. Tacker functionality will be enabled after
+ deployment is performed.

diff --git a/book-enea-nfv-core-installation-guide/doc/post_deploy_config.xml b/book-enea-nfv-core-installation-guide/doc/post_deploy_config.xml
deleted file mode 100644
index 9875f58..0000000
--- a/book-enea-nfv-core-installation-guide/doc/post_deploy_config.xml
+++ /dev/null
@@ -1,119 +0,0 @@
-
- Post-Deploy Configurations
-
- For running DPDK applications it is useful to isolate the available
- CPUs between the Linux kernel, ovs-dpdk and
- nova-compute.
-
- All of the Hardware nodes can be accessed through
- ssh from the Fuel console. Simply create an
- ssh connection to Fuel (e.g. root@10.20.0.2 pwd: r00tme)
- and run the following command to get a list of the servers and the IPs where
- they can be reached.
- - [root@fuel ~]# fuel node -id | status | name | cluster | ip | mac | roles / - | pending_roles | online | group_id ----+--------+------------------+---------+-----------+-------------------+----------/ ------------------+---------------+--------+--------- - 4 | ready | Untitled (8c:c2) | 1 | 10.20.0.6 | 68:05:ca:46:8c:c2 | ceph-osd,/ - compute | | 1 | 1 - 2 | ready | Untitled (8c:45) | 1 | 10.20.0.5 | 68:05:ca:46:8c:45 | controller,/ - mongo, tacker | | 1 | 1 - 1 | ready | Untitled (8c:d4) | 1 | 10.20.0.4 | 68:05:ca:46:8c:d4 | ceph-osd,/ - controller | | 1 | 1 - 5 | ready | Untitled (8c:c9) | 1 | 10.20.0.7 | 68:05:ca:46:8c:c9 | ceph-osd,/ - compute | | 1 | 1 - 3 | ready | Untitled (8b:64) | 1 | 10.20.0.3 | 68:05:ca:46:8b:64 | controller,/ - vitrage | | 1 | 1 -[root@fuel ~]# | | 1 | 2 -[root@fuel ~]# ssh node-3 -Warning: Permanently added 'node-3' (ECDSA) to the list of known hosts. - -The programs included with the Debian GNU/Linux system are free software; -the exact distribution terms for each program are described in the -individual files in /usr/share/doc/*/copyright. - -Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent -permitted by applicable law. -Last login: Thu Aug 24 19:40:06 2017 from 10.20.0.2 -root@node-3:~# - -
- CPU isolation configuration
-
- It is a good idea to isolate the cores that will perform packet
- processing and run QEMU. The example below shows how to set
- isolcpus on a compute node that has one Intel Xeon
- processor E5-2660 v4, 14 cores, and 28 hyper-threaded cores.
-
- root@node-3:~# cat /etc/default/grub | head -n 10
-# If you change this file, run 'update-grub' afterwards to update
-# /boot/grub/grub.cfg.
-# For full documentation of the options in this file, see:
-#   info -f grub -n 'Simple configuration'
-
-GRUB_DEFAULT="Advanced options for Ubuntu, with Linux 4.4.50-rt62nfv"
-GRUB_TIMEOUT=10
-GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
-GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
-GRUB_CMDLINE_LINUX=" console=tty0 net.ifnames=1 biosdevname=0 rootdelay=90 /
-nomodeset hugepagesz=2M hugepages=1536 isolcpus=10-47,58-95"
-root@node-3:~# update-grub
-Generating grub configuration file ...
-Found linux image: /boot/vmlinuz-4.10.0-9924-generic
-Found initrd image: /boot/initrd.img-4.10.0-9924-generic
-done
-root@node-3:~# reboot
-Connection to node-3 closed by remote host.
-Connection to node-3 closed.
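After the node comes back up, it is worth checking that the isolation
actually took effect. A minimal sanity check, assuming the isolcpus setting
above (the affinity list reported for PID 1 should be the complement of the
isolated set):

root@node-3:~# cat /proc/cmdline | grep -o 'isolcpus=[0-9,-]*'
isolcpus=10-47,58-95
root@node-3:~# taskset -cp 1
pid 1's current affinity list: 0-9,48-57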
- -
- Nova Compute configurations
-
- In order to isolate the OpenStack instances on dedicated CPUs, nova
- must be configured with vcpu_pin_set. Please refer to
- the Nova configuration guide for more information.
-
- The example below applies, again, to an Intel Xeon processor E5-2660
- v4. Here the vcpu_pin_set is configured so that pairs
- of thread siblings are chosen.
-
- root@node-3:~# cat /etc/nova/nova.conf | grep vcpu_pin_set
-vcpu_pin_set = "16-47,64-95"
-root@node-3:~#
-
- After modifying Nova configuration options on the Compute nodes, it
- is necessary to restart nova-compute to put them into
- effect.
-
- root@node-3:~# systemctl restart nova-compute
-root@node-3:~#
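When choosing a vcpu_pin_set value, the sibling pairing can be read directly
from sysfs. A short sketch consistent with the example above, assuming CPU 16
and CPU 64 are hyper-threads of the same physical core on this topology:

root@node-3:~# cat /sys/devices/system/cpu/cpu16/topology/thread_siblings_list
16,64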
- -
- OpenvSwitch with DPDK configuration
-
- Enea NFV Core 1.0 comes with OpenvSwitch as the virtual switch
- option. In the selected scenario, OpenvSwitch also uses DPDK for passing
- traffic to and from the VMs. One of the features that comes with
- OpenvSwitch v2.7.0 is the ability to set pmd-cpu-mask.
- This effectively pins the userspace poll-mode driver (PMD) threads to
- the specified set of CPUs.
-
- By default, the OpenvSwitch that comes installed on the compute
- nodes has no pmd-cpu-mask set. There is an option to set it
- from the Fuel menu before deploy, but it can always be manually set
- post-deploy as follows:
-
- root@node-3:~# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=7e0
-root@node-3:~# ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask
-"7e0"
-root@node-3:~#
-
- No restart is required; OpenvSwitch automatically spawns new PMD
- threads and sets the affinity as necessary.
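For reference, the mask 7e0 is hexadecimal for binary 11111100000, i.e.
CPUs 5-10. To check where the PMD threads actually land, the per-PMD
statistics can be queried; the sketch below assumes the mask set above with
all six cores on NUMA node 0, and trims the output to the thread header
lines:

root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show | grep "pmd thread"
pmd thread numa_id 0 core_id 5:
pmd thread numa_id 0 core_id 6:
pmd thread numa_id 0 core_id 7:
pmd thread numa_id 0 core_id 8:
pmd thread numa_id 0 core_id 9:
pmd thread numa_id 0 core_id 10: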
-
\ No newline at end of file -- cgit v1.2.3-54-g00ecf