HowTo Install Mirantis OpenStack 8.0 with Mellanox ConnectX-3 Pro Adapters Support (Ethernet Network + VLAN Segmentation)
Version 33
This post shows how to set up and configure Mirantis Fuel 8 (OpenStack Liberty based on Ubuntu 14.04) to support Mellanox ConnectX-3/ConnectX-3 Pro adapters. This procedure enables SR-IOV mode for the VMs on the compute nodes and iSER transport mode for the storage nodes.
- Related References
- Setup Diagram
- Setup Hardware Requirements (Example)
- Storage Server RAID Setup
- Network Physical Setup
- Networks Allocation (Example)
- Install the Deployment Server
- Configure the Deployment Node for Running the Fuel VM
- Create and Configure the new VM To Run the Fuel Master Installation
- Install the Fuel Master from an ISO Image
- Install the Mellanox Plugin
- Configure the OpenStack Environment
- Deployment
- Health Check
- Start the SR-IOV VM
Related References
- Mellanox CloudX for OpenStack page
- Mellanox CloudX, Mirantis Fuel Solution Guide
- Reference Architectures and Planning Guide — Mirantis OpenStack v8.0 | Documentation
- Mirantis Fuel ISO Download page
- MLNX-OS User Manual (located at support.mellanox.com)
- HowTo upgrade MLNX-OS Software on Mellanox switches
- Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED)
Before reading this post, make sure you are familiar with Mirantis Fuel 8.0 installation procedures.
Setup Diagram
Note: Except for the Deployment node, all nodes should be connected to all five networks.
Note: The servers' IPMI interfaces and the switch management interfaces (wiring and configuration) are out of the scope of this post. You do need to ensure management access (SSH) to the Mellanox SX1710 switch in order to perform its configuration.
Setup Hardware Requirements (Example)
Component | Quantity | Description |
---|---|---|
Deployment Node | 1 | DELL PowerEdge R620 |
Cloud Controller and Compute servers | 6 | HP DL360p G8 |
Cloud Storage Server | 1 | Supermicro X9DR3-F |
Admin (PXE) and Public switch | 1 | 1Gb switch with VLANs configured to support both networks |
Ethernet Switch | 1 | Mellanox SX1710 VPI 36-port 56Gb/s switch configured in Ethernet mode |
Cables | | 16 x 1Gb CAT-6e for the Admin (PXE) and Public networks; 7 x 56GbE copper cables up to 2m (MC2207130-XXX) |
Note: You can use Mellanox ConnectX-3 Pro EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
Note: Make sure that the Mellanox switch is set to Ethernet mode.
Storage Server RAID Setup
- Two SSD drives in bays 0-1 configured in RAID-1 (Mirror) are used for the OS.
- Twenty-two SSD drives in bays 3-24 configured in RAID-10 are used for the Cinder volume, which will be created on this RAID drive.
Network Physical Setup
- Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board).
We recommend that you record the MAC addresses of the Controller and Storage servers to make the cloud installation easier (see the Add Controller Nodes section in the Nodes tab below).
Note: All cloud servers should be configured to run PXE boot over the Admin (PXE) network.
- Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
- Connect Port #1 of the ConnectX-3 Pro adapter to the SX1710 Ethernet switch (Private, Management, and Storage networks).
Note: The interface names (eth0, eth1, p2p1, etc.) may vary between servers from different vendors.
Rack Setup Example
Deployment Node
Compute and Controller Nodes
Storage Node
The configuration is the same as for the Compute and Controller nodes.
- Configure the required VLANs and enable flow control on the Ethernet switch ports. All related VLANs should be enabled on the 56GbE switch (Private, Management, and Storage networks). Flow control is required when running iSER (RDMA over Converged Ethernet - RoCE).
On Mellanox switches, use the command flow below to enable VLANs (e.g. VLAN 1-100 on all ports).
Note: Refer to the MLNX-OS User Manual (located at support.mellanox.com) to get familiar with the switch software.
Note: Before starting to use the Mellanox switch, it is recommended to upgrade the switch to the latest MLNX-OS version.
switch > enable
switch # configure terminal
switch (config) # vlan 1-100
switch (config vlan 1-100) # exit
switch (config) # interface ethernet 1/1 switchport mode hybrid
switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
switch (config) # interface ethernet 1/2 switchport mode hybrid
switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
...
switch (config) # interface ethernet 1/36 switchport mode hybrid
switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all
On Mellanox switches, run the following commands to enable flow control on the switch ports (all ports in this example):
switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force
switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force
To save the configuration (permanently), run:
switch (config) # configuration write
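Optionally, you can verify the VLAN and port configuration before moving on. The following is only a sketch using MLNX-OS show commands; the exact command set and output format depend on the MLNX-OS version, so consult the MLNX-OS User Manual for your release:
switch (config) # show vlan
switch (config) # show interfaces ethernet 1/1 switchport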
Networks Allocation (Example)
The example in this post is based on the network allocation defined in this table:
Network | Subnet/Mask | Gateway | Notes |
---|---|---|---|
Admin (PXE) | 10.20.0.0/24 | N/A | The network is used to provision and manage cloud nodes by the Fuel Master. The network is enclosed within a 1Gb switch and has no routing outside. 10.20.0.0/24 is the default Admin (PXE) subnet and we use it with no changes. |
Management | 192.168.0.0/24 | N/A | This is the Cloud Management network. The network uses VLAN 2 in SX1710 over 56Gb interconnect. 192.168.0.0/24 is the default Management subnet and we use it with no changes. |
Storage | 192.168.1.0/24 | N/A | This network is used to provide storage services. The network uses VLAN 3 in SX1710 over 56Gb interconnect. 192.168.1.0/24 is the default Storage subnet and we use it with no changes. |
Public and Neutron L3 | 10.7.208.0/24 | 10.7.208.1 | The Public network is used to connect cloud nodes to an external network. Neutron L3 is used to provide floating IPs for tenant VMs. Both networks are represented by IP ranges within the same subnet, with routing to external networks. |
All cloud nodes get a Public IP address. In addition, you must allocate two more Public IP addresses: one for the HA virtual IP and one for the virtual router. We do not use the virtual router in our deployment, but we still need to reserve a Public IP address for it. The Public network range is therefore the number of cloud nodes + 2; for our example with 7 cloud nodes, we need 9 IPs in the Public network range.
Note: Consider a larger range if you are planning to add more servers to the cloud later.
In our build we use the 10.7.208.53 - 10.7.208.76 IP range for both Public and Neutron L3. The IP allocation is as follows:
- 10.7.208.53 - Deployment node
- 10.7.208.54 - Fuel Master VM
- 10.7.208.55 - 10.7.208.63 - Public IPs for the cloud nodes (including the HA virtual IP and the reserved virtual router IP)
- 10.7.208.64 - 10.7.208.76 - Neutron L3 floating IP range
The following scheme illustrates the public IP range allocation for the Fuel VM, all setup nodes, and the Neutron L3 floating IP range.
Install the Deployment Server
In our setup we installed the CentOS 7.2 64-bit distribution, using the CentOS-7-x86_64-Minimal-1511.iso image with the minimal configuration. All missing packages are installed later.
The two 1Gb interfaces are connected to the Admin (PXE) and Public networks:
- em1 (1st interface) connected to Admin (PXE) and configured statically. This configuration is not actually used, but it saves time on bridge creation later (see the example after this list):
- IP: 10.20.0.1
- Netmask: 255.255.255.0
- Gateway: N/A
- em2 (2nd interface) connected to Public and configured statically:
- IP: 10.7.208.53
- Netmask: 255.255.255.0
- Gateway: 10.7.208.1
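For reference, a minimal sketch of the matching static configuration files on CentOS 7, assuming the interfaces are named em1 and em2 as above (adjust the names and addresses to your hardware):
# /etc/sysconfig/network-scripts/ifcfg-em1 (Admin/PXE network)
DEVICE=em1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.20.0.1
NETMASK=255.255.255.0
# /etc/sysconfig/network-scripts/ifcfg-em2 (Public network)
DEVICE=em2
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.7.208.53
NETMASK=255.255.255.0
GATEWAY=10.7.208.1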
Configure the Deployment Node for Running the Fuel VM
Log in to the Deployment node via SSH or locally and perform the actions listed below:
- Disable the Network Manager.
# sudo systemctl stop NetworkManager.service
# sudo systemctl disable NetworkManager.service
- Install packages required for virtualization.
# sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install virt-manager
- Install packages required for x-server.
# sudo yum install xorg-x11-xauth xorg-x11-server-utils xclock
- Reboot the Deployment server.
Create and Configure the new VM To Run the Fuel Master Installation
- Start the VM.
# virt-manager
- Create a new VM using the virt-manager wizard.
- During the creation process, provide the VM with four cores, 8GB of RAM, and a 200GB disk.
Note: For details, see the Fuel Master node hardware requirements section in the Mirantis OpenStack Planning Guide — Mirantis OpenStack v8.0 | Documentation.
- Mount the Fuel installation disk to VM virtual CD-ROM device.
- Configure the network so the Fuel VM will have two NICs connected to Admin (PXE) and Public networks.
- Use virt-manager to create bridges.
- br-em1, the Admin (PXE) network bridge used to connect Fuel VM's eth0 to Admin (PXE) network.
- br-em2, the Public network bridge used to connect Fuel VM's eth1 to Public network.
- Connect the Fuel VM eth0 interface to br-em1.
- Add a second network interface (eth1) to the Fuel VM and connect it to br-em2.
Note: You can use other names for the bridges. In this example, the names were chosen to match the names of the connected physical interfaces of the Deployment node.
- Save the settings and start the VM. (A command-line alternative is sketched below.)
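If you prefer the command line over virt-manager, the bridges and the Fuel VM can also be created with ip and virt-install. This is only a sketch under the assumptions of this example (bridge names br-em1/br-em2, a 200GB disk image, and an illustrative ISO path); adapt it to your environment:
# ip link add name br-em1 type bridge
# ip link set em1 master br-em1
# ip link set br-em1 up
# ip link add name br-em2 type bridge
# ip link set em2 master br-em2
# ip link set br-em2 up
# virt-install --name fuel-master --vcpus 4 --ram 8192 \
    --disk path=/var/lib/libvirt/images/fuel-master.qcow2,size=200 \
    --cdrom /root/MirantisOpenStack-8.0.iso \
    --network bridge=br-em1 --network bridge=br-em2 \
    --graphics vnc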
Install the Fuel Master from an ISO Image
Note: Do not start any nodes other than the Fuel Master until the Mellanox plugin is installed.
- Boot the Fuel Master server from the ISO image as a virtual DVD (see the Mirantis Fuel ISO Download page in the Related References above).
- Choose option 1 and press the TAB key to edit the default boot options:
a. Remove the default gateway (10.20.0.1).
b. Change the DNS to 10.20.0.2 (the Fuel VM IP).
c. Add the following option to the end of the line: "showmenu=yes"
The tuned boot parameters should look like this:
Note: Do not try to change eth0 to another interface or the deployment might fail.
- The Fuel VM will reboot itself after the initial installation is completed and the Fuel menu will appear.
Note: Ensure that the VM starts from the local disk and not from the CD-ROM; otherwise you will restart the installation from the beginning.
- Begin the network setup:
- Configure eth0 as the PXE (Admin) network interface.
Ensure the default Gateway entry is empty for this interface; the network is enclosed within the switch and has no routing outside.
Select Apply.
- Configure eth1 as the Public network interface.
This interface is routable to the LAN/internet and will be used to access the server. Configure the static IP address, netmask, and default gateway on the public network interface.
Select Apply.
- Set the PXE Setup. The PXE network is enclosed within the switch; use the default settings.
Press the Check button to ensure no errors are found.
- Set the Time Sync.
a. Choose the Time Sync option on the left-hand menu.
b. Configure the NTP server entries suitable for your infrastructure.
c. Press Check to verify the settings.
- Proceed with the installation.
Navigate to Quit Setup and select Save and Quit.
Once the Fuel installation is done, you will see the Fuel access details for both SSH and HTTP.
- Configure the Fuel Master VM SSH server to allow connections from the Public network. By default, Fuel accepts SSH connections from the Admin (PXE) network only. Follow the steps below to allow connections from the Public network (a one-line alternative is shown after this procedure):
- Use virt-manager to access Fuel Master VM console
- Edit sshd_config:
# vi /etc/ssh/sshd_config
- Find and comment out this line:
ListenAddress 10.20.0.2
- Restart sshd:
# service sshd restart
- Access Fuel using one of the following:
- Web UI at http://10.7.208.54:8000 (use admin/admin as the user/password)
- SSH by connecting to 10.7.208.54 (use root/r00tme as the user/password)
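Alternatively, the same change can be applied in one step from the Fuel Master console; this is just a shorthand for the manual sshd_config edit described above:
# sed -i 's/^ListenAddress 10.20.0.2/#ListenAddress 10.20.0.2/' /etc/ssh/sshd_config
# service sshd restart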
Install the Mellanox Plugin
The Mellanox plugin configures support for Mellanox ConnectX-3 Pro network adapters. It enables high-performance SR-IOV networking for compute traffic and iSER (iSCSI over RDMA) block storage networking, which reduces CPU overhead, boosts throughput, reduces latency, and lets network traffic bypass the software switch layer.
Follow the steps below to install the plugin. For the complete instructions, please refer to: HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 8.0.
- Log in to the Fuel Master, download the Mellanox plugin RPM (URL below), and store it on your Fuel Master server.
[root@fuel ~]# wget http://bgate.mellanox.com/openstack/openstack_plugins/fuel_plugins/8.0/plugins/mellanox-plugin-3.0-3.0.0-1.noarch.rpm
- Install plugin from download directory:
# fuel plugins --install=mellanox-plugin-3.0-3.0.0-1.noarch.rpm
Note: The Mellanox plugin replaces the current bootstrap image; the original image is backed up in /opt/old_bootstrap_image/.
- Verify that the plugin was successfully installed. It should be listed in the output of the fuel plugins command (see the illustrative output after this procedure).
- Create a new Bootstrap image to enable Mellanox hardware detection:
Run the create_mellanox_vpi_bootstrap command.
# create_mellanox_vpi_bootstrap
- Reboot all nodes, including already-discovered nodes.
To reboot already-discovered nodes, run the following command:
# reboot_bootstrap_nodes -a
- Run the fuel nodes command to check if there are any already-discovered nodes.
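For reference, a successful plugin installation should be reflected in the fuel plugins output roughly as follows; the id and package_version values are illustrative and depend on your installation:
[root@fuel ~]# fuel plugins
id | name            | version | package_version
---|-----------------|---------|----------------
1  | mellanox-plugin | 3.0.0   | 3.0.0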
Create a new OpenStack Environment
- Open a web browser (for example, at http://10.7.208.54:8000) and log into the Fuel environment using admin/admin as the username and password.
- Configure the new environment wizard as follows:
- Name and Release
- Name: SR-IOV
- Release: Liberty on Ubuntu 14.04
- Compute
- QEMU-KVM
- Network
- Neutron with VLAN segmentation
- Storage Backend
- Block storage: LVM
- Additional Services
- None
- Finish
- Click the Create button.
Configure the OpenStack Environment
Settings Tab
- Specify that KVM is the hypervisor type.
KVM is required to enable the Mellanox OpenStack features.
Open the Settings tab, select the Compute section, and then choose KVM as the hypervisor type.
- Enable the desired Mellanox OpenStack features:
- Open the Other section.
- Select the relevant Mellanox plugin version if you have multiple versions installed.
- Enable SR-IOV support.
- Check SR-IOV direct port creation in private VLAN networks (Neutron).
- Set the desired number of virtual NICs.
Note: Relevant for VLAN segmentation only.
Note: The number of virtual NICs is the number of virtual functions (VFs) that will be available on the Compute node.
Note: One VF will be used for the iSER storage transport if you choose to use iSER. In that case, one fewer VF will be available for virtual machines.
- Select Support quality of service over VLAN networks and ports (Neutron).
If selected, Neutron Quality of Service (QoS) will be enabled for VLAN networks and ports over Mellanox HCAs.
Note: This feature is supported only if SR-IOV is enabled. First enable SR-IOV and then enable QoS.
- Enable the iSCSI Extension over RDMA (iSER) protocol for volumes (Cinder).
By enabling this feature you will use the iSER block storage transport instead of iSCSI. iSER provides lower latency, higher bandwidth, and reduced CPU overhead.
Note: A dedicated virtual function will be reserved for the storage endpoint, and priority flow control has to be enabled on the switch-side port.
Nodes Tab
Server Discovery by Fuel
This section assigns cloud roles to servers. The servers must first be discovered by Fuel, so make sure they are configured for PXE boot over the Admin (PXE) network. When done, reboot the servers and wait for them to be discovered. Discovered nodes are listed in the top right corner of the Fuel dashboard.
Now you can add UNALLOCATED NODES to the setup.
First add the Controller nodes, then the Storage node, and then the Compute nodes. A description of how to select each follows.
Add Controller Nodes
- Click Add Node.
- Identify the three controller nodes by the last four hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. Assign these nodes the Controller role.
- Click the Apply Changes button.
Add the Storage Node
- Click Add Node.
- Identify your storage node by the last four hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. In our example this is the only Supermicro server, so identification by vendor is easy. Specify this node as a Storage - Cinder node.
- Click Apply Changes.
Add Compute Nodes
- Click Add Node.
- Select all the remaining nodes and assign them the Compute role.
- Click Apply Changes.
Configure Interfaces
In this step, each network must be mapped to a physical interface for each node. You can choose and configure multiple nodes in parallel.
If there are hardware differences between the selected nodes (such as the number of network ports), bulk configuration is not allowed; in that case the Configure Interfaces button displays an error icon.
The example below allows configuring six nodes in parallel. The seventh node (Supermicro storage node) will be configured separately.
In this example, we set the Admin (PXE) network to eno1 and the Public network to eno2.
The Storage, Private, and Management networks should run on the ConnectX-3 adapters 56GbE port.
Note: In some cases the speed of the Mellanox interface may be shown as 10Gb/s.
Click Back To Node List and perform network configuration for Storage Node.
Configure Disks
Note: There is no need to change the defaults for the Controller and Compute nodes unless changes are required. For the Storage node, we recommend allocating only the high-performing RAID drive to Cinder storage; the small disk is allocated to the Base System.
- Select the Storage node.
- Press the Disk Configuration button.
- Click the sda disk bar, set the Cinder allocation to 0 MB, and make the Base System occupy the entire drive.
- Click Apply.
Networks Tab
Section Node Network Group - default
Public
In our example, the nodes' Public IP range is 10.7.208.55-10.7.208.63 (7 IPs used for physical servers, one reserved for HA, and one reserved for the virtual router).
See the Network Allocation section at the beginning of this document for details.
The rest of the public IP range will be used for floating IPs in the Neutron L3 section below.
In our example, the Public network does not use VLAN tagging. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.
Storage
In this example, we select VLAN 3 for the Storage network. The CIDR is unchanged.
Management
In this example, we select VLAN 2 for the Management network (see the Networks Allocation table above). The CIDR is unchanged.
Section Settings
Neutron L2:
In this example, we set the VLAN range to 4-100. It should be aligned with the switch VLAN configuration (above).
The Base MAC address is unchanged.
Neutron L3
Floating Network Parameters: The floating IP range is a continuation of our Public IP range. In our deployment we use the 10.7.208.64 - 10.7.208.76 IP range.
Internal Network: Leave CIDR and Gateway with no changes.
Name servers: Leave DNS servers with no changes.
Other
We assign a Public IP to all nodes: make sure Assign public network to all nodes is checked.
Use the Neutron L3 HA option.
For the deployment we also use the 8.8.8.8 and 8.8.4.4 DNS servers.
Save the Configuration
Click Save Settings at the bottom of the page.
Verify Networks
Click Verify Networks.
You should see the following message: "Verification succeeded. Your network is configured correctly." Otherwise, check the log file for troubleshooting.
Note: If your public network runs a DHCP server, you may experience a verification failure. If the range selected for the cloud above does not overlap with the DHCP pool, you can ignore this message; if it does overlap, fix the overlap.
Deployment
Click the Deploy Changes button and follow the installation progress and logs on the Nodes tab.
The OS installation will start.
When the OS installation is finished, the OpenStack installation starts on the first controller.
OpenStack is then installed on the rest of the controllers, and afterwards on the Compute and Storage nodes.
The installation is completed.
Health Check
All tests should pass. Otherwise, check the log file for troubleshooting.
You can now safely use the cloud.
Click the dashboard link Horizon at the top of the page.
Start the SR-IOV VM
In Liberty, each VM can be started with either a standard para-virtual network port or an SR-IOV network port.
By default, the para-virtual, OVS-connected port is used. You need to request vnic_type direct explicitly in order to assign an SR-IOV NIC to the VM.
First you need to create an SR-IOV Neutron port and then spawn the VM with that port attached. In this example, we show how to start a VM with an SR-IOV network port using the CLI, since this feature is not available in the UI.
Start the SR-IOV Test VM
We provide scripts to spawn an SR-IOV test VM.
- Log in to a cloud Controller node and source openrc.
[root@fuel ~]# ssh node-1
root@node-1:~# source openrc
- Upload the SR-IOV test VM image:
root@node-1:~# upload_sriov_cirros
- Log in to OpenStack Horizon, go to the Images section, and check that the downloaded cirros-testvm-mellanox image is listed.
- Start the SR-IOV test VM:
root@node-1:~# start_sriov_vm
- Make sure that SR-IOV works. Log in to the VM's console and run the lspci command:
$ lspci -k | grep mlx
00:04:0 Class 0280: 15b3:1004 mlx4_core
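You can also confirm on the Compute node hosting the VM that virtual functions were created. A quick check (the exact model string varies by adapter, but ConnectX-3 VFs are reported as a "Virtual Function" device):
# lspci | grep -i "virtual function"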
Start the Custom Image VM
To start your own images, use the procedure below.
Note: The VM image must have a Mellanox NIC driver installed in order to get a working network.
Ensure the VM image <image_name> has Mellanox driver installed.
We recommend that you use the most recent version of the Mellanox OFED driver. See Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED) for more information.
- Log in to a cloud Controller node and source openrc.
- Create an SR-IOV enabled neutron port first.
# port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
where $net_id is the ID or name of the network you want to use.
- Then start a new VM bound to the newly created port:
# nova boot --flavor <flavor_name> --image <image_name> --nic port-id=$port_id <vm_name>
where $port_id is the ID of the SR-IOV port we just created.
- Make sure that SR-IOV works. Log in to the VM's console and run the lspci command.
You should see the Mellanox VF in the output if SR-IOV works:
# lspci | grep -i mellanox
00:04.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
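Putting the steps above together, a minimal end-to-end sketch run from a controller node could look as follows. The network name admin_internal_net, the flavor m1.small, and the image/VM names are illustrative placeholders only:
root@node-1:~# source openrc
root@node-1:~# net_id=`neutron net-list | awk '/admin_internal_net/ {print $2}'`
root@node-1:~# port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
root@node-1:~# nova boot --flavor m1.small --image my-mlnx-image --nic port-id=$port_id sriov-vm-01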
Usernames and Passwords
- Fuel server Dashboard user / password: admin / admin
- Fuel server SSH user / password: root / r00tme
- TestVM SSH user / password: cirros / cubswin:)
- To get controller node CLI permissions run: # source /root/openrc
Prepare Linux VM Image for CloudX
In order to have network and RoCE support on the VM, MLNX_OFED (2.2-1 or later) should be installed on the VM environment.
MLNX_OFED can be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
(For CentOS/RHEL guests, you can use virt-manager to open an existing VM image and perform the MLNX_OFED installation; see the sketch below.)
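As a rough sketch of that installation inside the VM (the tarball name depends on the MLNX_OFED version and the guest distribution you download):
$ tar xzf MLNX_OFED_LINUX-<version>-<distro>-x86_64.tgz
$ cd MLNX_OFED_LINUX-<version>-<distro>-x86_64
$ sudo ./mlnxofedinstall
$ sudo /etc/init.d/openibd restart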
Known Issues:
Issue # | Description | Workaround | Link to the Bug (in Launchpad) |
---|---|---|---|
1 | The default number of supported virtual functions (VFs), 16, is not sufficient. | To have more vNICs available, contact Mellanox Support. | |
2 | Snapshot creation of a running instance fails. | To work around this issue, shut down the instance before taking a snapshot. | |
3 | Third-party adapters based on the Mellanox chipset may not have SR-IOV enabled by default. | Contact the device manufacturer for configuration instructions and the required firmware. | |
4 | In some cases the speed of the Mellanox interface can be shown as 10Gb/s. | | |