Tuesday, 17 January 2017

MOS 8 Full

HowTo Install Mirantis OpenStack 8.0 with Mellanox ConnectX-3 Pro Adapters Support (Ethernet Network + VLAN Segmentation)

This post shows how to set up and configure Mirantis Fuel 8 (OpenStack Liberty based on Ubuntu 14.04) to support Mellanox ConnectX-3/ConnectX-3 Pro adapters. This procedure enables SR-IOV mode for the VMs on the compute nodes and iSER transport mode for the storage nodes.


Before reading this post, make sure you are familiar with Mirantis Fuel 8.0 installation procedures.


Setup Diagram



Note: Except for the Deployment node, all nodes should be connected to all five networks.
Note: The wiring and configuration of the servers' IPMI interfaces and the switch management interfaces are out of scope. Make sure you have management (SSH) access to the Mellanox SX1710 switch in order to perform its configuration.

Setup Hardware Requirements (Example)

Component: Deployment Node (quantity: 1)
DELL PowerEdge R620
  • CPU: 2 x E5-2650 @ 2.00GHz
  • MEM: 128 GB
  • HD: 2 x 900GB SAS 10k in RAID-1

Component: Cloud Controller and Compute servers (quantity: 6; 3 x Controllers, 3 x Computes)
HP DL360p G8
  • CPU: 2 x E5-2660 v2 @ 2.20GHz
  • MEM: 128 GB
  • HD: 2 x 450GB 10k SAS (RAID-1)
  • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

Component: Cloud Storage server (quantity: 1)
Supermicro X9DR3-F
  • CPU: 2 x E5-2650 @ 2.00GHz
  • MEM: 128 GB
  • HD: 24 x 6Gb/s SATA Intel SSD DC S3500 Series 480GB (SSDSC2BB480G4)
  • RAID Ctrl: LSI Logic MegaRAID SAS 2208 with battery
  • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

Component: Admin (PXE) and Public switch (quantity: 1)
1Gb switch with VLANs configured to support both networks

Component: Ethernet switch (quantity: 1)
Mellanox SX1710 VPI 36-port 56Gb/s switch configured in Ethernet mode

Component: Cables
  • 16 x 1Gb CAT-6e for the Admin (PXE) and Public networks
  • 7 x 56GbE copper cables up to 2m (MC2207130-XXX)


Note: You can use Mellanox ConnectX-3 Pro EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
Note: Make sure that the Mellanox switch is set to Ethernet mode.

Storage Server RAID Setup

  • Two SSD drives in bays 0-1 configured in RAID-1 (Mirror) are used for the OS.
  • Twenty-two SSD drives in bays 3-24 configured in RAID-10 are used for the Cinder volume, which will be created on this RAID drive.
    (See storage_raid.png for the storage server RAID layout.)

Network Physical Setup

  1. Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board). 
        We recommend that you record the MAC addresses of the Controller and Storage servers to make the cloud installation easier (see the Controller Node section under the Nodes tab below).
    Note: All cloud servers should be configured to run PXE boot over the Admin (PXE) network.
  2. Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
  3. Connect Port #1 of the ConnectX-3 Pro to the SX1710 Ethernet switch (Private, Management, and Storage networks).
    Note: The interface names (eth0, eth1, p2p1, etc.) may vary between servers from different vendors.
    Rack Setup Example

    Deployment Node

    Compute and Controller Nodes
    (See controller-compute.png.)
    Storage Node
    The configuration is the same as for the Compute and Controller nodes. (See Storage.png.)
  4. Configure the required VLANs and enable flow control on the Ethernet switch ports. All related VLANs should be enabled on the 56GbE switch (Private, Management, and Storage networks). On Mellanox switches, use the command flow below to enable VLANs (e.g., VLAN 1-100 on all ports).
    Note: Refer to the MLNX-OS User Manual to get familiar with the switch software (available at support.mellanox.com).
    Note: Before starting to use the Mellanox switch, it is recommended to upgrade it to the latest MLNX-OS version.
    switch > enable
    switch # configure terminal
    switch (config) # vlan 1-100
    switch (config vlan 1-100) # exit
    switch (config) # interface ethernet 1/1 switchport mode hybrid
    switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
    switch (config) # interface ethernet 1/2 switchport mode hybrid
    switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
    ...

    switch (config) # interface ethernet 1/36 switchport mode hybrid
    switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all
    Flow control is required when running iSER (RDMA over Converged Ethernet, RoCE).
    On Mellanox switches, run the following command to enable flow control on the switches (on all ports in this example):
    switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force
    switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force
    To save the configuration (permanently), run:
    switch (config) # configuration write

Networks Allocation (Example)

The example in this post is based on the network allocation defined in this table:
Network: Admin (PXE)
  Subnet/Mask: 10.20.0.0/24, Gateway: N/A
  Used by the Fuel Master to provision and manage cloud nodes. The network is enclosed within a 1Gb switch and has no routing outside. 10.20.0.0/24 is the default Admin (PXE) subnet and we use it with no changes.
Network: Management
  Subnet/Mask: 192.168.0.0/24, Gateway: N/A
  The Cloud Management network. It uses VLAN 2 on the SX1710 over the 56Gb interconnect. 192.168.0.0/24 is the default Management subnet and we use it with no changes.
Network: Storage
  Subnet/Mask: 192.168.1.0/24, Gateway: N/A
  Used to provide storage services. It uses VLAN 3 on the SX1710 over the 56Gb interconnect. 192.168.1.0/24 is the default Storage subnet and we use it with no changes.
Network: Public and Neutron L3
  Subnet/Mask: 10.7.208.0/24, Gateway: 10.7.208.1
  The Public network is used to connect Cloud nodes to an external network. Neutron L3 is used to provide Floating IPs for tenant VMs. Both networks are represented by IP ranges within the same subnet, with routing to external networks.

All Cloud nodes will have a Public IP address. In addition, you should allocate two more Public IP addresses:
  • One IP is required for HA functionality.
  • The virtual router requires an additional Public IP address.
We do not use the virtual router in our deployment but still need to reserve a Public IP address for it. So the Public network range is the number of cloud nodes + 2. For our example with 7 Cloud nodes we need 9 IPs in the Public network range.
Note: Consider a larger range if you are planning to add more servers to the cloud later.
In our build we will use the 10.7.208.53 >> 10.7.208.76 IP range for both Public and Neutron L3.
IP allocation will be as follows:
  • Deployment node: 10.7.208.53
  • Fuel Master IP: 10.7.208.54
  • Public Range: 10.7.208.55 >> 10.7.208.63 (7 used for physical servers, 1 reserved for HA and 1 reserved for the virtual router)
  • Neutron L3 Range: 10.7.208.64 >> 10.7.208.76 (used for the Floating IP pool)

The following scheme illustrates the public IP range allocation for the Fuel VM, all setup nodes and Neutron L3 floating IP range.

Install the Deployment Server

In our setup we installed the 64-bit CentOS 7.2 distribution. We used the CentOS-7-x86_64-Minimal-1511.iso image and installed the minimal configuration; we will install any missing packages later.
The two 1Gb interfaces are connected to Admin(PXE) and Public networks:
    • em1 (1st interface) connected to Admin (PXE) and configured statically. This configuration will not actually be used, but it will save time on bridge creation later:
      • IP: 10.20.0.1
      • Netmask: 255.255.255.0
      • Gateway: N/A
    • em2 (2nd interface) connected to Public and configured statically:
      • IP: 10.7.208.53
      • Netmask: 255.255.255.0
      • Gateway: 10.7.208.1
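
For reference, a minimal sketch of the matching CentOS 7 network-scripts files, assuming the em1/em2 interface names used in this setup (these addresses later move to the bridges created for the Fuel VM):

    # /etc/sysconfig/network-scripts/ifcfg-em1 (Admin/PXE, static, no gateway)
    DEVICE=em1
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.20.0.1
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-em2 (Public, static, routed)
    DEVICE=em2
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.7.208.53
    NETMASK=255.255.255.0
    GATEWAY=10.7.208.1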

Configure the Deployment Node for Running the Fuel VM

Log in to the Deployment node via SSH or locally and perform the actions listed below:
  1. Disable the Network Manager.
    # sudo systemctl stop NetworkManager.service
    # sudo systemctl disable NetworkManager.service
  2. Install packages required for virtualization.
    # sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install virt-manager
  3. Install packages required for x-server.
    # sudo yum install xorg-x11-xauth xorg-x11-server-utils xclock
  4. Reboot the Deployment server.
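
Optionally, before creating the VM you can verify that the host is ready for KVM virtualization; a minimal sketch using standard CentOS 7 commands (not part of the original procedure):

    # Confirm the CPU exposes hardware virtualization and the KVM modules are loaded
    # egrep -c '(vmx|svm)' /proc/cpuinfo
    # lsmod | grep kvm

    # Make sure libvirtd is enabled and running after the reboot
    # systemctl enable libvirtd
    # systemctl status libvirtd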

Create and Configure the new VM To Run the Fuel Master Installation

  1. Start virt-manager:
    # virt-manager
  2. Create a new VM using the virt-manager wizard.
  3. During the creation process, provide the VM with four cores, 8 GB of RAM, and a 200 GB disk.
    Note: For details see the Fuel Master node hardware requirements section in Mirantis OpenStack Planning Guide — Mirantis OpenStack v8.0 | Documentation.
  4. Mount the Fuel installation ISO to the VM's virtual CD-ROM device.
  5. Configure the network so the Fuel VM will have two NICs connected to Admin (PXE) and Public networks.
    1. Use virt-manager to create bridges.
      1. br-em1, the Admin (PXE) network bridge used to connect Fuel VM's eth0 to Admin (PXE) network.
      2. br-em2, the Public network bridge used to connect Fuel VM's eth1 to Public network.
    2. Connect the Fuel VM eth0 interface to br-em1.
    3. Add a second network interface (eth1) to the Fuel VM and connect it to br-em2.
      Note: You can define other names for the bridges. In this example the names were chosen to match the names of the connected physical interfaces of the Deployment node. A CLI alternative to creating the bridges in virt-manager is sketched after this list.
  6. Save settings and start VM.
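
If you prefer to prepare the bridges from the command line instead of virt-manager, a minimal sketch using CentOS 7 network-scripts follows (assuming the em1/em2 names used above; the IP settings move from the physical interfaces to the bridges):

    # /etc/sysconfig/network-scripts/ifcfg-br-em1
    DEVICE=br-em1
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.20.0.1
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-em1 (enslaved to br-em1, no IP)
    DEVICE=em1
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    BRIDGE=br-em1

    # Repeat the same pattern for br-em2/em2 with the Public IP settings, then:
    # systemctl restart network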

Install the Fuel Master from an ISO Image

Note: Avoid starting any nodes other than the Fuel Master until the Mellanox plugin is installed.
  1. Boot the Fuel Master server from the ISO image as a virtual DVD (the ISO image is available for download from Mirantis).
  2. Choose option 1 and press the TAB key to edit the default options:

    a. Remove the default gateway (10.20.0.1).
    b. Change the DNS to 10.20.0.2 (the Fuel VM IP).
    c. Add the following command to the end: "showmenu=yes"
    The tuned boot parameters should now reflect these changes.
    Note: Do not try to change eth0 to another interface or the deployment might fail.
  3. The Fuel VM will reboot itself after the initial installation is completed, and the Fuel menu will appear.
    Note: Ensure that the VM starts from the local disk and not the CD-ROM; otherwise, the installation will restart from the beginning.
  4. Begin the network setup:
    1. Configure eth0 as the PXE (Admin) network interface.
      Ensure the default Gateway entry is empty for the interface. The network is enclosed within the switch and has no routing outside. 
      Select Apply.
    2. Configure eth1 as the Public network interface.
      The interface is routable to LAN/internet and will be used to access the server. Configure the static IP address, netmask and default gateway on the public network interface. 
      Select Apply.
  5. Set the PXE Setup. The PXE network is enclosed within the switch. Use the default settings.
    Press the Check button to ensure no errors are found.
  6. Set the Time Sync.
    a. Choose the Time Sync option on the left-hand Menu.
    b. Configure the NTP server entries suitable for your infrastructure.
    c. Press Check to verify settings.
  7. Proceed with the installation.
    Navigate to Quit Setup and select Save and Quit.

    Once the Fuel installation is done, you will see Fuel access details both for SSH and HTTP.
  8. Configure the Fuel Master VM SSH server to allow connections from Public network.
    By default, Fuel accepts SSH connections from the Admin (PXE) network only.
    Follow the steps below to allow connections from the Public network (a scripted one-line alternative is sketched after this list):
    1. Use virt-manager to access Fuel Master VM console
    2. Edit sshd_config: 
      # vi /etc/ssh/sshd_config
    3. Find and comment this line: 
      ListenAddress 10.20.0.2
    4. Restart sshd: 
      # service sshd restart
  9. Access Fuel using one of the following:
  • Web UI at http://10.7.208.54:8000 (use admin/admin as the username/password)
  • SSH to 10.7.208.54 (use root/r00tme as the username/password)
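
If you prefer to script step 8, a minimal one-line equivalent (assuming the default ListenAddress value shown above) is:

    # sed -i 's/^ListenAddress 10.20.0.2/#ListenAddress 10.20.0.2/' /etc/ssh/sshd_config
    # service sshd restart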

Install the Mellanox Plugin

The Mellanox plugin configures support for Mellanox ConnectX-3 Pro network adapters. It enables high-performance SR-IOV networking for compute traffic and iSER (iSCSI over RDMA) block storage networking, which reduces CPU overhead, boosts throughput, reduces latency, and allows network traffic to bypass the software switch layer.

Follow the steps below to install the plugin. For the complete instructions, please refer to HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 8.0.
  1. Login to the Fuel Master, download the Mellanox plugin rpm from here and store it on your Fuel Master server.
    [root@fuel ~]# wget http://bgate.mellanox.com/openstack/openstack_plugins/fuel_plugins/8.0/plugins/mellanox-plugin-3.0-3.0.0-1.noarch.rpm
  2. Install plugin from download directory:
    # fuel plugins --install=mellanox-plugin-3.0-3.0.0-1.noarch.rpm
    Note: The Mellanox plugin replaces the current bootstrap image; the original image is backed up in /opt/old_bootstrap_image/.
  3. Verify that the plugin was successfully installed. It should be displayed when running the fuel plugins command.
  4. Create a new Bootstrap image to enable Mellanox hardware detection:
    Run the create_mellanox_vpi_bootstrap command.
    # create_mellanox_vpi_bootstrap
  5. Reboot all nodes, including already-discovered nodes.
    To reboot already-discovered nodes, run the following command:
    # reboot_bootstrap_nodes -a
  6. Run the fuel nodes command to check if there are any already-discovered nodes.
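
For reference, the whole plugin installation sequence as run from the Fuel Master shell looks like this (same commands as the steps above):

    [root@fuel ~]# wget http://bgate.mellanox.com/openstack/openstack_plugins/fuel_plugins/8.0/plugins/mellanox-plugin-3.0-3.0.0-1.noarch.rpm
    [root@fuel ~]# fuel plugins --install=mellanox-plugin-3.0-3.0.0-1.noarch.rpm
    [root@fuel ~]# fuel plugins                   # verify the plugin is listed
    [root@fuel ~]# create_mellanox_vpi_bootstrap  # rebuild the bootstrap image
    [root@fuel ~]# reboot_bootstrap_nodes -a      # reboot already-discovered nodes
    [root@fuel ~]# fuel nodes                     # confirm nodes are (re)discovered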


Create a new OpenStack Environment

Open the Fuel UI in a web browser (for example: http://10.7.208.54:8000) and log in using admin/admin as the username and password.

1. Open a new environment in the Fuel dashboard. A configuration wizard will start.
2. Configure the new environment wizard as follows:
      • Name and Release
        • Name: SR-IOV
        • Release: Liberty on Ubuntu 14.04
      • Compute
        • QEMU-KVM
      • Network
        • Neutron with VLAN segmentation
      • Storage Backend
        • Block storage: LVM
      • Additional Services
        • None
      • Finish
        • Click Create button.
3. Click on the new environment created and proceed with the environment configuration.

Configure the OpenStack Environment


Settings Tab

  1. Specify that KVM is the hypervisor type.
    KVM is required to enable the Mellanox OpenStack features.
    Open the Settings tab, select Compute section, and then choose KVM as the hypervisor type. 
  2. Enable the desired Mellanox OpenStack features.
    1. Open the Other section.
    2. Select relevant Mellanox plugin versions if you have multiple versions installed.
    3. Enable SR-IOV support.
      1. Check SR-IOV direct port creation in private VLAN networks (Neutron).
        1. Set the desired number of Virtual NICs.
          Note: Relevant for VLAN segmentation only
          Note: The number of virtual NICs is the number of virtual functions (VFs) that will be available on the Compute node.
          Note: One VF will be utilized for the iSER storage transport if you choose to use iSER. In that case, one less VF will be available for Virtual Machines.
      2. Select the Support quality of service over VLAN networks and ports (Neutron).
        If selected, Neutron "Quality of service" (QoS) will be enabled for VLAN networks and ports over Mellanox HCAs.
        Note: This feature is supported only if SR-IOV is enabled. First enable SR-IOV and then enable QoS.
      3. Enable the iSCSI Extension over RDMA (iSER) protocol for volumes (Cinder).
        By enabling this feature you will use the iSER block storage transport instead of iSCSI. iSER provides improved latency, better bandwidth, and reduced CPU overhead.
        Note: A dedicated Virtual Function will be reserved for the storage endpoint, and priority flow control has to be enabled on the switch-side port.
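
After deployment you can sanity-check SR-IOV on a compute node; a minimal sketch (the interface name p2p1 is only an example for the ConnectX-3 port on that node):

    # List the Mellanox physical function and its virtual functions on the PCI bus
    # lspci | grep -i mellanox

    # Show per-VF state (MAC, VLAN) reported by the PF interface
    # ip link show p2p1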

Nodes Tab


Server Discovery by Fuel

This section assigns cloud roles to servers. The servers must first be discovered by Fuel, so make sure they are configured to PXE boot over the Admin (PXE) network. When done, reboot the servers and wait for them to be discovered. Discovered nodes will be listed in the top right corner of the Fuel dashboard.
Now you can add UNALLOCATED NODES to the setup. 
First you can add Controller, Storage, and then Compute nodes. A description of how to select each follows.

Add Controller Nodes

  1. Click Add Node.
  2. Identify the three controller nodes by the last four hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. Assign each of these nodes the Controller role.
  3. Click the Apply Changes button.

Add the Storage Node

  1. Click Add Node.
  2. Identify your storage node by the last four hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. In our example this is the only Supermicro server, so identification by vendor is easy. Assign this node the Storage - Cinder role.
  3. Click Apply Changes

Add Compute Nodes

  1. Click Add Node.
  2. Select all remaining nodes and assign them the Compute role.
  3. Click Apply Changes.

Configure Interfaces

In this step, each network must be mapped to a physical interface for each node. You can choose and configure multiple nodes in parallel.
If there are hardware differences between the selected nodes (such as the number of network ports), bulk configuration is not allowed; if you attempt it, the Configure Interfaces button displays an error icon.

The example below allows configuring six nodes in parallel. The seventh node (Supermicro storage node) will be configured separately.

In this example, we set the Admin (PXE) network to eno1 and the Public network to eno2.
The Storage, Private, and Management networks should run on the ConnectX-3 adapters 56GbE port.
Note: In some cases the speed of Mellanox interface can be shown as 10Gb/s.

Click Back To Node List and perform network configuration for Storage Node.

Configure Disks

Note: There is no need to change the defaults for the Controller and Compute nodes unless changes are required. For the Storage node, we recommend that you allocate only the high-performing RAID volume to Cinder storage; the small disk will be allocated to the Base System.
  1. Select the Storage node.
  2. Press the Disk Configuration button.
  3. Click on the sda disk bar, allocate 0 MB to Cinder, and make the Base System occupy the entire drive.
  4. Click Apply.

Networks Tab



Section Node Network Group - default
Public
In our example, the Public IP range for the nodes is 10.7.208.55-10.7.208.63 (seven used for physical servers, one reserved for HA, and one reserved for the virtual router).
See the Networks Allocation section at the beginning of this document for details.
The rest of the Public IP range will be used for Floating IPs in the Neutron L3 section below.
In our example, the Public network does not use VLAN tagging. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.



Storage
In this example, we select VLAN 3 for the Storage network. The CIDR is unchanged.
Management
In this example, we select VLAN 2 for the Management network. The CIDR is unchanged.
Section Settings
Neutron L2:
In this example, we set the VLAN range to 4-100. It should be aligned with the switch VLAN configuration (above).
The Base MAC address is unchanged.

Neutron L3
Floating Network Parameters: The floating IP range is a continuation of our Public IP range. In our deployment we use the 10.7.208.64 - 10.7.208.76 IP range.
Internal Network: Leave CIDR and Gateway with no changes.
Name servers: Leave DNS servers with no changes.

Other

We assign a Public IP to all nodes; make sure Assign public network to all nodes is checked.
Use the Neutron L3 HA option.
For this deployment we also use the 8.8.8.8 and 8.8.4.4 DNS servers.

Save the Configuration

Click Save Settings at the bottom of the page.

Verify Networks

Click Verify Networks.
You should see the following message: Verification succeeded. Your network is configured correctly. Otherwise, check the log files for troubleshooting.
Note: If your public network runs a DHCP server, you may see a verification failure. If the range selected for the cloud above does not overlap with the DHCP pool, you can ignore this message; if it does overlap, fix the overlap.

Deployment

Click the Deploy Changes button; you can follow the installation progress on the Nodes tab and in the logs.
The OS installation will start first.

When the OS installation is finished, the OpenStack installation starts on the first controller.
OpenStack is then installed on the rest of the controllers, and afterwards on the Compute and Storage nodes.

When all nodes are reported as ready, the installation is complete.

Health Check

  1. Click on the Health Check tab.
  2. Check the Select All checkbox.
  3. Click Run Tests.
    All tests should pass. Otherwise, check the log file for troubleshooting.
You can now safely use the cloud.
Click the dashboard link Horizon at the top of the page.

Start the SR-IOV VM

In Liberty, each VM can start with either a standard para-virtual network port or an SR-IOV network port.
By default the para-virtual, OVS-connected port is used; you need to request vnic_type direct explicitly in order to assign an SR-IOV NIC to the VM.
First create an SR-IOV Neutron port, and then spawn the VM with that port attached. In this example we show how to start a VM with an SR-IOV network port using the CLI, since this feature is not available in the UI.

Start the SR-IOV Test VM

We provide scripts to spawn an SR-IOV test VM.
    1. Log in to a cloud Controller node and source openrc:
      [root@fuel ~]# ssh node-1
      root@node-1:~# source openrc
    2. Upload the SR-IOV test VM image:
      root@node-1:~# upload_sriov_cirros
    3. Log in to OpenStack Horizon, go to the Images section, and check that the downloaded cirros-testvm-mellanox image is listed.
    4. Start the SR-IOV test VM:
      root@node-1:~# start_sriov_vm
    5. Make sure that SR-IOV works: log in to the VM's console and run the lspci command.
      $ lspci -k | grep mlx
      00:04.0 Class 0280: 15b3:1004 mlx4_core

Start the Custom Image VM

To start your own images, use the procedure below.
Note: The VM image must have a Mellanox NIC driver installed in order to have a working network.
Ensure the VM image <image_name> has the Mellanox driver installed.
We recommend that you use the most recent version of the Mellanox OFED driver. See Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED) for more information.
    1. Login to cloud Controller node and source openrc.
    2. Create an SR-IOV enabled neutron port first.
      # port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
      where $net_id is the ID or name of the network you want to use

    3. Then boot a new VM bound to the port you just created:
      # nova boot --flavor <flavor_name> --image <image_name> --nic port-id=$port_id <vm_name>
      where $port_id is the ID of the SR-IOV port we just created
    4. Make sure that SR-IOV works: log in to the VM's console and run the lspci command.
      You should see the Mellanox VF in the output if SR-IOV works.
      # lspci | grep -i mellanox
      00:04.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
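
You can also confirm from the controller that the Neutron port was bound as an SR-IOV (direct) port; a minimal sketch using the sriov_port name created above:

      # neutron port-show sriov_port | grep vnic_type
      | binding:vnic_type | direct |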


Usernames and Passwords

  • Fuel server Dashboard user / password: admin / admin
  • Fuel server SSH user / password: root / r00tme
  • TestVM SSH user / password: cirros / cubswin:)
  • To get controller node CLI permissions run:  # source /root/openrc

Prepare Linux VM Image for CloudX

In order to have network and RoCE support in the VM, MLNX_OFED (2.2-1 or later) should be installed in the VM image.
(For CentOS/RHEL guests, you can use virt-manager to open an existing VM image and perform the MLNX_OFED installation.)
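
A minimal sketch of installing MLNX_OFED inside a running guest (the ISO file name is a placeholder; use the bundle that matches your guest OS):

    # Mount the MLNX_OFED ISO inside the guest and run the installer
    $ sudo mount -o loop MLNX_OFED_LINUX-<version>-<distro>-x86_64.iso /mnt
    $ cd /mnt
    $ sudo ./mlnxofedinstall
    $ sudo reboot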

Known Issues:

  1. The default number of supported virtual functions (VFs), 16, is not sufficient.
     Workaround: To have more vNICs available, contact Mellanox Support.
  2. Snapshot creation of a running instance fails.
     Workaround: Shut down the instance before taking a snapshot.
  3. Third-party adapters based on the Mellanox chipset may not have SR-IOV enabled by default.
     Workaround: Apply to the device manufacturer for configuration instructions and for the required firmware.
  4. In some cases, the speed of the Mellanox interface can be shown as 10Gb/s.
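
For adapters where SR-IOV is disabled in firmware (issue 3), the firmware configuration can usually be inspected, and changed if the vendor's instructions allow it, with mlxconfig from the Mellanox Firmware Tools; a minimal sketch (the device path is an example):

    # Start the MST service and list the detected devices
    # mst start
    # mst status

    # Query the current firmware configuration of the adapter
    # mlxconfig -d /dev/mst/mt4103_pciconf0 query

    # Enable SR-IOV and set the number of VFs, then reboot the server to apply
    # mlxconfig -d /dev/mst/mt4103_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=16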

