HowTo Install Mirantis OpenStack 8.0 with Mellanox ConnectX-4 Adapters (ETH, BOND, VXLAN @ Scale)
This post describes how to set up and configure Mirantis OpenStack 8.0 (Liberty, based on Ubuntu 14.04 LTS) over a 100Gb/s Ethernet fabric with bonded 25Gb/s host ports, VXLAN tenant networks and CEPH storage.
For the InfiniBand 100Gb/s version of this guide, refer to HowTo Install Mirantis OpenStack 8.0 with Mellanox ConnectX-4 Adapters Support (InfiniBand Network @ Scale).
- Description:
- Setup Diagram
- Setup Hardware Requirements
- Server Configuration
- Physical Network Setup
- Network Switch Configuration
- Networks Allocation (Example)
- Install the Fuel Master from ISO Image:
- Maintenance updates
- Fuel routing for second and further racks
- Install Mellanox Plugin
- Creating a new OpenStack Environment
- Configuring the OpenStack Environment
- Deployment
- Health Test
- You can now safely use the cloud
- Prepare the Linux VM Image for CloudX
Related References
- Mellanox CloudX for OpenStack page
- Mellanox CloudX, Mirantis Fuel Solution Guide
- Mirantis OpenStack v8.0 Documentation — Reference Architectures and Planning Guide
- Mirantis - CEPH best practices
- Mirantis Fuel ISO Download page
- MLNX-OS User Manual - (located at support.mellanox.com )
- Mellanox NEO 1.6 Documentation
- HowTo Configure MAGP on Mellanox Switches
- HowTo upgrade MLNX-OS Software on Mellanox switches
- HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 8.0
- Download VM image: CentOS
Before reading this post, make sure you are familiar with the Mirantis OpenStack 8.0 installation procedures.
Before starting to use the Mellanox switches, we recommend that you upgrade the switches to the latest MLNX-OS version.
Description:
In this example we build a setup that can be scaled in the future.
The main highlights of this example are:
- A non-blocking fat-tree network with 2 spine and 4 leaf switches, supporting 64 nodes in 2 racks.
- SN2700 switches are used as spine switches (non-blocking, 32 x 100Gb Ethernet ports).
- SN2410 switches are used as leaf switches, 2 per rack (46 x 25Gb downlinks, 2 x 25Gb IPL links, 8 x 100Gb uplinks).
- The setup can scale up to 8 spine and 32 leaf switches (16 racks), supporting up to 32 nodes per rack (512 nodes total) at a non-blocking ratio.
- Capacity can be increased up to 46 nodes per rack (736 nodes total) at a blocking ratio of 1.5 (see the diagram below).
- 2 x 25Gb uplinks from each host.
- The bond works in Active-Active mode.
- 3 cloud controller nodes for HA.
- CEPH storage.
- All cloud nodes are connected to the Admin (PXE), Public, Private, Management and Storage networks.
- The Fuel Master runs as a VM on the Deployment node.
Notes:
- You can use more controllers, but their number should always be odd.
- If you plan to use 3 or more racks, consider distributing the Controller nodes across racks to increase redundancy.
Note: The server’s IPMI wiring and configuration are out of the scope of this post.
Setup Diagram
Setup Hardware Requirements
Component | Quantity | Requirements |
---|---|---|
Deployment node | 1 | A modest server to run the Fuel VM and UFM software. CPU: Intel E5-26xx or later; HD: 250 GB or larger; RAM: 32 GB or more; NICs: 2 x 1Gb |
Cloud Controller and Compute servers | 6 | Strong servers to run the cloud control plane and tenant VM workloads. CPU: Intel E5-26xx or later; HD: 450 GB or larger; RAM: 128 GB or more; NICs: 2 x 1Gb plus 1 x dual-port ConnectX-4 Lx 25Gb (see Physical Network Setup) |
Cloud Storage servers | 2 | Strong servers with high I/O performance to act as the CEPH backend. CPU: Intel E5-26xx or later; RAM: 64 GB or more; NICs: 2 x 1Gb plus 1 x dual-port ConnectX-4 Lx 25Gb. Note: Best practice is to use 3 CEPH OSD servers with the replication ratio set to 3; in this example we use 2 servers just for a POC. |
Admin (PXE) switch | 2 x L2, 1 x L3 | 2 x 1Gb L2 switches with VLANs configured to carry the Admin (PXE) network of each rack; 1 x 1Gb L3 switch capable of routing traffic from both racks to the Fuel Master node |
Public switch | 1 x | 1Gb L2 switch |
Leaf switch | 4 x | Mellanox SN2410 switch: 48 ports of 25Gb/s, 8 ports of 100Gb/s |
Spine switch | 2 x | Mellanox SN2700 switch: 32 ports of 100Gb/s |
Cables | 18 x / 20 x / 8 x | 18 x 1Gb CAT-6e for the Admin (PXE) and Public networks; 20 x 25Gb SFP28 copper cables up to 2m (MCP2M00-Axxx); 8 x 100Gb QSFP28 copper cables up to 2m (MCP1600-Cxxx) |
Note: This solution should also work with ConnectX-4 100Gb/s NICs and SN2700 switches as leaf switches, but the number of nodes per rack and the blocking ratio will differ.
Server Configuration
There are several BIOS prerequisites for the cloud hardware.
Go to the BIOS of each node and make sure that:
- Virtualization (Intel VT-x or AMD-V, depending on your CPU type) is enabled on all nodes, including the Deployment node (a quick check from Linux is sketched below).
- All nodes except the Deployment node are configured to boot from the NIC connected to the Admin (PXE) network.
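Once a node is booted into any Linux environment, you can sanity-check the virtualization setting as follows (a minimal sketch; the authoritative setting remains the BIOS option itself).
Check that the CPU exposes hardware virtualization extensions (vmx for Intel VT-x, svm for AMD-V):
# grep -E -c '(vmx|svm)' /proc/cpuinfo
If the KVM modules are already loaded, a "kvm: disabled by bios" message in the kernel log means the BIOS option is still off:
# dmesg | grep -i kvm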
Physical Network Setup
- Connect all nodes to the Admin (PXE) 1GbE switch, preferably through the on-board eth0 interface.
We recommend recording the MAC address of the Controller and Storage servers to make the cloud installation easier (see the Controller Node section in the Nodes tab below, and the one-liner sketched after this list).
- Connect all nodes to the Public 1GbE switch, preferably through the on-board eth1 interface.
- Connect Port #1 of the ConnectX-4 Lx of all nodes (except the Deployment node) to the first Mellanox SN2410 switch of their rack (Private, Management, Storage networks).
- Connect Port #2 of the ConnectX-4 Lx of all nodes (except the Deployment node) to the second Mellanox SN2410 switch of their rack (Private, Management, Storage networks).
Note: The interface names (eth0, eth1, p2p1, etc.) can vary between servers from different vendors.
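To record the MAC address, a one-liner like the following can be run on each node from any booted Linux environment (a sketch; eno1 is an example interface name, replace it with the NIC wired to the Admin (PXE) network):
# cat /sys/class/net/eno1/address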
Rack Setup Example
Deployment Node
Compute and Controller Nodes
Storage Node
This is the same as Compute and Controller nodes.
Network Switch Configuration
Note: Refer to the MLNX-OS User Manual to get familiar with switch software (located at support.mellanox.com).
Note: Before starting to use the Mellanox switches, it is recommended to upgrade them to the latest MLNX-OS version.
- Configure OSPF between the switches. There are two ways to do this:
- To configure it with Mellanox NEO, see the official Mellanox How-To, section 6.4.1.1 Virtual Modular Switch (VMS) Wizard (page 74), for instructions.
NEO can run on any host that can route packets to the Switch Management network.
In our case, NEO runs as a VM on the Deployment node. It is connected to the Public network and reaches the Switch Management network through the router.
How you provide NEO with connectivity to the Switch Management network is up to you.
Warning: During this process NEO will drop all settings on the Mellanox switches.
Warning: The IPL links must be physically disconnected before running the wizard, or it will fail. Reconnect the IPL links ONLY after the wizard has finished building OSPF.
- To configure OSPF manually, see HowTo Configure OSPF on Mellanox Switches (Running-Config).
- Build the IPL between each TOR (leaf) pair.
This example covers one pair.
Repeat the same actions for each TOR pair.
Run on both switches:
switch > enable
switch # configure terminal
switch [standalone: master] (config) # lacp
switch [standalone: master] (config) # lldp
switch [standalone: master] (config) # no spanning-tree
switch [standalone: master] (config) # ip routing
switch [standalone: master] (config) # protocol mlag
switch [standalone: master] (config) # dcb priority-flow-control enable force
Now we need to configure the IPL between the switches (sw-1 and sw-2).
On both switches run the following:
switch [standalone: master] (config) # interface port-channel 1
switch [standalone: master] (config interface port-channel 1) # exit
switch [standalone: master] (config) # interface ethernet 1/47 channel-group 1 mode active
switch [standalone: master] (config) # interface ethernet 1/48 channel-group 1 mode active
switch [standalone: master] (config) # vlan 4001
switch [standalone: master] (config vlan 4001) # exit
switch [standalone: master] (config) # interface vlan 4001
switch [standalone: master] (config interface vlan 4001) # exit
switch [standalone: master] (config) # interface port-channel 1 ipl 1
switch [standalone: master] (config) # interface port-channel 1 dcb priority-flow-control mode on force
Assign IP addresses to the VLAN interfaces and configure the IPL peers.
On sw-1 run:
switch [standalone: master] (config) # interface vlan 4001
switch [standalone: master] (config interface vlan 4001) # ip address 10.10.10.1 255.255.255.0
switch [standalone: master] (config interface vlan 4001) # ipl 1 peer-address 10.10.10.2
switch [standalone: master] (config interface vlan 4001) # exit
On sw-2 run:
switch [standalone: master] (config) # interface vlan 4001
switch [standalone: master] (config interface vlan 4001) # ip address 10.10.10.2 255.255.255.0
switch [standalone: master] (config interface vlan 4001) # ipl 1 peer-address 10.10.10.1
switch [standalone: master] (config interface vlan 4001) # exit
Now assign the virtual IP, MAC address and domain name.
On both switches run:
switch [standalone: master] (config) # mlag-vip <DOMAIN_NAME> ip <VIRTUAL_IP> /24 force
switch [standalone: master] (config) # mlag system-mac 00:00:5E:00:01:5D
switch [standalone: master] (config) # no mlag shutdown
Repeat the same actions for each TOR pair.
Note: The IP addresses and MACs above are just examples; replace them with values suitable for your environment. Remember that the VIPs and MACs must be unique for each TOR pair in the fabric.
- Create a port channel for each port pair.
The ConnectX-4 ports are configured in LACP mode, so the corresponding switch ports must be configured as port channels.
To do this, run on each switch:
interface mlag-port-channel 2-4
exit
interface mlag-port-channel 2-4 no shutdown
interface ethernet 1/1 mlag-channel-group 2 mode active
interface ethernet 1/2 mlag-channel-group 3 mode active
interface ethernet 1/3 mlag-channel-group 4 mode active
interface mlag-port-channel 2 lacp-individual enable force
interface mlag-port-channel 3 lacp-individual enable force
interface mlag-port-channel 4 lacp-individual enable force
Note: Adapt this example to the port numbers relevant to your setup.
- Configure VLANs for each port channel.
interface mlag-port-channel 2 switchport mode hybrid
interface mlag-port-channel 2 switchport hybrid allowed-vlan all
interface mlag-port-channel 3 switchport mode hybrid
interface mlag-port-channel 3 switchport hybrid allowed-vlan all
interface mlag-port-channel 4 switchport mode hybrid
interface mlag-port-channel 4 switchport hybrid allowed-vlan all
- The last step is configuring MAGP (Multi-Active Gateway Protocol).
Please see HowTo Configure MAGP on Mellanox Switches for detailed reference.
The example below shows the exact commands to configure MAGP on the 2 TOR switches of the first rack.
Other racks should be configured with the IP addresses reserved for them (see the Networks Allocation table below).
Note that all MACs and IPs must be unique and must not repeat between racks.
Define VLAN 2 (Management). On sw-1 run:
interface vlan 2
ip address 192.168.0.2 /24
exit
protocol magp
interface vlan 2 magp 2
ip virtual-router address 192.168.0.1
ip virtual-router mac-address AA:BB:CC:DD:EE:1A
exit
interface vlan 2 ip ospf area 0.0.0.0
On sw-2, run the same commands but use ip address 192.168.0.3 /24.
Define VLAN 3 (Storage). On sw-1 run:
interface vlan 3
ip address 192.168.1.2 /24
exit
protocol magp
interface vlan 3 magp 3
ip virtual-router address 192.168.1.1
ip virtual-router mac-address AA:BB:CC:DD:EE:1B
exit
interface vlan 3 ip ospf area 0.0.0.0
On sw-2, run the same commands but use ip address 192.168.1.3 /24.
Define VLAN 4 (Private). On sw-1 run:
interface vlan 4
ip address 192.168.2.2 /24
exit
protocol magp
interface vlan 4 magp 4
ip virtual-router address 192.168.2.1
ip virtual-router mac-address AA:BB:CC:DD:EE:1C
exit
interface vlan 4 ip ospf area 0.0.0.0
On sw-2, run the same commands but use ip address 192.168.2.3 /24.
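After completing the MLAG and MAGP configuration on a TOR pair, it is worth verifying the state before moving on. Assuming your MLNX-OS version provides these show commands, a quick check could look like:
switch [standalone: master] (config) # show mlag
switch [standalone: master] (config) # show magp
switch [standalone: master] (config) # show interfaces port-channel summary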
Networks Allocation (Example)
The example in this post is based on the network allocation defined in this table:
Rack # | Network | Subnet/Mask | Gateway | Notes |
---|---|---|---|---|
1 | Admin (PXE) | 10.20.0.0/24 | 10.20.0.1 | Used by the Fuel Master to provision and manage cloud nodes. Enclosed within a 1Gb switch, with no routing outside the cloud. 10.20.0.0/24 is the default Fuel subnet and is used unchanged. |
1 | Management | 192.168.0.0/24 | 192.168.0.1 | Cloud Management network. Uses VLAN 2 on the SN2410 over the 25Gb interconnect. 192.168.0.0/24 is the default Fuel subnet; the range 192.168.0.10-254 is used. |
1 | Storage | 192.168.1.0/24 | 192.168.1.1 | Storage services network. Uses VLAN 3 on the SN2410 over the 25Gb interconnect. 192.168.1.0/24 is the default Fuel subnet; the range 192.168.1.10-254 is used. |
1 | Private | 192.168.2.0/24 | 192.168.2.1 | VXLAN (tenant) network. Uses VLAN 4 on the SN2410 over the 25Gb interconnect. 192.168.2.0/24 is the default Fuel subnet; the range 192.168.2.10-254 is used. |
2 | Admin (PXE) | 10.30.0.0/24 | 10.30.0.1 | Used by the Fuel Master to provision and manage cloud nodes. Enclosed within a 1Gb switch, with no routing outside the cloud. The 10.30.0.0/24 subnet is used. |
2 | Management | 192.168.10.0/24 | 192.168.10.1 | Cloud Management network. Uses VLAN 2 on the SN2410 over the 25Gb interconnect. The range 192.168.10.10-254 is used. |
2 | Storage | 192.168.11.0/24 | 192.168.11.1 | Storage services network. Uses VLAN 3 on the SN2410 over the 25Gb interconnect. The range 192.168.11.10-254 is used. |
2 | Private | 192.168.12.0/24 | 192.168.12.1 | VXLAN (tenant) network. Uses VLAN 4 on the SN2410 over the 25Gb interconnect. The range 192.168.12.10-254 is used. |
1, 2 | Public and Neutron L3 | 10.7.208.0/24 | 10.7.208.1 | See the notes below the table. |
The Public network is used to connect cloud nodes to an external network; Neutron L3 is used to provide Floating IPs for tenant VMs. Both are represented by IP ranges within the same subnet, with routing to external networks.
All cloud nodes get a Public IP address. In addition, you should allocate 2 more Public IP addresses: one for the HA virtual IP and one for the virtual router. We do not use the virtual router in our deployment, but a Public IP address must still be reserved for it, so the Public network range is the number of cloud nodes + 2. For our example with 7 cloud nodes we need 9 IPs in the Public network range.
Note: Consider a larger range if you are planning to add more servers to the cloud later.
In our build we use the 10.7.208.53 - 10.7.208.76 IP range for both Public and Neutron L3. As we have 2 racks, the Public node range is divided in 2 parts; the per-rack allocation is shown in the Networks Tab section below.
The scheme below illustrates our setup IP allocation.
Install Deployment Server
In our setup we installed the CentOS release 7.3 64-bit distribution, using the CentOS-7-x86_64-Minimal-1511.iso image with the Minimal configuration. The missing packages will be installed later.
Two 1Gb interfaces are connected to the Admin (PXE) and Public networks:
- em1 (the first interface) is connected to Admin (PXE) and configured statically.
This configuration will not actually be used, but it will save time on bridge creation later.
- IP: 10.20.0.254
- Netmask: 255.255.255.0
- Gateway: N/A
- em2 (the second interface) is connected to Public and configured statically:
- IP: 10.7.208.53
- Netmask: 255.255.255.0
- Gateway: 10.7.208.1
A sketch of the corresponding ifcfg files is shown below.
Note: Install all OS updates and restart the server before proceeding.
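For reference, on CentOS 7 the static configuration above can be written as ifcfg files similar to this sketch (interface names em1/em2 and the IP addresses match this example; adjust them to your hardware):
/etc/sysconfig/network-scripts/ifcfg-em1:
DEVICE=em1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.20.0.254
NETMASK=255.255.255.0
/etc/sysconfig/network-scripts/ifcfg-em2:
DEVICE=em2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.7.208.53
NETMASK=255.255.255.0
GATEWAY=10.7.208.1
Apply the configuration with:
# systemctl restart network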
Configure Deployment Node for Running Fuel and NEO Appliance VMs
Log in to the Deployment node via SSH or locally and perform the actions listed below:
- Install the packages required for virtualization:
# sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install virt-manager docker-io lvm2
- Install the packages required for X server support:
# sudo yum install xorg-x11-xauth xorg-x11-server-utils xclock
- Reboot the Deployment server (optionally enable the virtualization service first, as sketched below).
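Before rebooting, you may also want to make sure the virtualization service starts automatically and that the KVM modules are available; a minimal sketch:
# sudo systemctl enable libvirtd
# lsmod | grep kvm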
Create and Configure a New VM To Run the NEO Appliance node
Note: Mellanox NEO can run on any node connected to the fabric that has access to the management ports of the Mellanox switches.
In this example the Mellanox NEO VM runs on the Deployment node and is connected to the Public network.
Please see the Network Switch Configuration section for details.
- Download the NEO VM image from the Mellanox customer portal (see the link at the top of the document).
- Save this image to the Deployment node.
- Start virt-manager.
- Create a new VM using the virt-manager wizard.
- During the creation process, give the VM 4 cores and 8GB of RAM.
- During the creation process, select the saved NEO image as the existing disk.
- Configure the network so that the NEO VM has a NIC connected to the Switch Management network:
- Use virt-manager to create the bridge br-em2 - the Public network bridge. It is used to connect the NEO VM's eth0 to the Public network (a command-line sketch follows this list).
- Connect the NEO VM's eth0 interface to br-em2.
- Save the settings and start the VM.
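If you prefer the command line over virt-manager for bridge creation, br-em2 can be sketched on CentOS 7 with ifcfg files like the following (the Public IP moves from em2 to the bridge; br-em1 for the Admin (PXE) network, needed later for the Fuel VM, is created the same way):
/etc/sysconfig/network-scripts/ifcfg-br-em2:
DEVICE=br-em2
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.7.208.53
NETMASK=255.255.255.0
GATEWAY=10.7.208.1
/etc/sysconfig/network-scripts/ifcfg-em2:
DEVICE=em2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br-em2
Apply the configuration with:
# systemctl restart network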
Configuring NEO
The default NEO OS login/password is root/123456.
- By default, NEO configures its eth0 interface dynamically. Assign it a static IP address from the Public network.
- As we are not using Mellanox UFM, edit the configuration and comment out the UFM IP address and credentials:
# vi /opt/neo/providers/ib/conf/netservice.cfg
[UFM IB Params]
#ip = ip,ip,...
#ip = 10.7.208.53
#user = user,user,...
#user = admin
#password = password,password,...
#password = 123456
- Restart the NEO service:
# /opt/neo/neoservice restart
Create and Configure a New VM To Run the Fuel Master Installation
- Start virt-manager.
- Create a new VM using the virt-manager wizard.
- During the creation process, give the VM four cores, 8GB of RAM and a 200GB disk.
Note: For details see the Fuel Master node hardware requirements section in Mirantis OpenStack Planning Guide — Mirantis OpenStack v8.0 | Documentation.
- Mount the Fuel installation disk to the VM's virtual CD-ROM device.
- Configure the network so the Fuel VM has 2 NICs connected to the Admin (PXE) and Public networks:
- Use virt-manager to create the bridges:
- br-em1 - Admin (PXE) network bridge. It is used to connect the Fuel VM's eth0 to the Admin (PXE) network.
- br-em2 - Public network bridge. It is used to connect the Fuel VM's eth1 to the Public network.
- Connect the Fuel VM's eth0 interface to br-em1.
- Add an eth1 network interface to the Fuel VM and connect it to br-em2.
Note: You can use any other names for the bridges. In this example the names follow the names of the connected physical interfaces of the Deployment node.
- Save the settings and start the VM. (A virt-install sketch of the whole VM definition is shown below.)
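As an alternative to the virt-manager wizard, the whole Fuel Master VM definition can be sketched with virt-install (the VM name, disk path and ISO file name below are examples):
# virt-install --name fuel-master --vcpus 4 --ram 8192 --disk path=/var/lib/libvirt/images/fuel-master.qcow2,size=200,format=qcow2 --cdrom /tmp/MirantisOpenStack-8.0.iso --network bridge=br-em1 --network bridge=br-em2 --graphics vnc --noautoconsole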
Install the Fuel Master from ISO Image:
Note: Avoid starting the other nodes except for the Fuel Master until Mellanox plugin is installed and activated.
- Boot Fuel Master Server from the ISO image as a virtual DVD (click here to download ISO image).
- Choose option 1 and press the TAB key to edit the default options:
Remove the default gateway (10.20.0.1).
Change DNS to 10.20.0.2 (the Fuel VM IP).
Add "showmenu=yes" to the end of the line.
The tuned boot parameters should look like this:
Note: Do not try to change eth0 to another interface or the deployment may fail.
- The Fuel VM reboots itself after the initial installation completes, and the Fuel menu appears.
Note: Ensure that the VM starts from the local disk and not the CD-ROM; otherwise the installation will start from the beginning.
- Network setup:
- Configure eth0 - the PXE (Admin) network interface.
Ensure the default Gateway entry is empty for this interface – the network is enclosed within the switch and has no routing outside.
Select Apply.
- Configure eth1 – the Public network interface.
The interface is routable to the LAN/internet and will be used to access the server. Configure a static IP address, netmask and default gateway on the public network interface.
Select Apply.
- Set the PXE Setup.
The PXE network is enclosed within the switch. Do not make any changes; proceed with the defaults.
Press the Check button to ensure no errors are found.
- Set the Time Sync.
Choose the Time Sync tab in the left menu.
Configure NTP server entries suitable for your infrastructure.
Press Check to verify the settings.
- Proceed with the installation.
Navigate to Quit Setup and select Save and Quit.
Once the Fuel installation is done, you are provided with Fuel access details for both SSH and HTTP.
- Configure the Fuel Master VM SSH server to allow connections from the Public network.
By default Fuel accepts SSH connections from the Admin (PXE) network only.
Follow the steps below to allow connections from the Public network:
- Use virt-manager to access the Fuel Master VM console.
- Edit sshd_config:
# vi /etc/ssh/sshd_config
- Find and comment out the line:
ListenAddress 10.20.0.2
- Restart sshd:
# service sshd restart
- Access Fuel:
- Web UI at http://10.7.208.54:8000 (user/password: admin/admin)
- SSH to 10.7.208.54 (user/password: root/r00tme)
Maintenance updates
It is highly recommended to install the Mirantis Fuel maintenance updates before proceeding with the cloud deployment.
Please read the Mirantis OpenStack v8.0 — Maintenance Updates section to learn how to apply the updates.
Fuel routing for second and further racks
By default, during deployment only the first rack has routing to the internet over the Admin (PXE) network.
This is fixed in 2 steps:
- Adding iptables rules on the Fuel Master node.
- Adding a static route from the second rack's range to the proper gateway.
Follow the steps below:
- Edit file with iptables rules:
# vi /etc/sysconfig/iptables
- Find the lines enabling routing for the first rack (located in different parts of the file):
-A POSTROUTING -s 10.20.0.0/24 -o e+ -m comment --comment "004 forward_admin_net" -j MASQUERADE
-A FORWARD -s 10.20.0.0/24 -i eth0 -m comment --comment "050 forward admin_net" -j ACCEPT
- Use those lines as a reference and add your own:
-A POSTROUTING -s 10.30.0.0/24 -o e+ -m comment --comment "004 forward_admin_net" -j MASQUERADE
-A FORWARD -s 10.30.0.0/24 -i eth0 -m comment --comment "050 forward admin_net" -j ACCEPT
- Apply the updated rules to enable routing:
# iptables-restore /etc/sysconfig/iptables
- Validate that the rules are applied with the following command:
# iptables -t nat -L
- Add the custom route:
route add -net 10.30.0.0/24 gw 10.20.0.1
Note: Replace the subnet range in this command with the range you will use for this rack (see the sketch below for making the route persistent).
Similar actions should be performed for every additional rack.
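The route added with the route command above does not survive a reboot. On the CentOS-based Fuel Master, one way to make it permanent is a route-<interface> file; a sketch, assuming the Admin (PXE) interface is eth0:
# echo "10.30.0.0/24 via 10.20.0.1" >> /etc/sysconfig/network-scripts/route-eth0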
Install Mellanox Plugin
The Mellanox plugin configures support for Mellanox ConnectX-4 network adapters, enabling VXLAN offload, which reduces CPU usage while supporting up to 65,000 tenant networks.
Follow the steps below to install the plugin. For the complete instructions, refer to HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 8.0.
The Fuel Master node must already be installed. For more information on how to create a Fuel Master node, see the Mirantis Fuel 8.0 documentation.
- Download the plugin rpm file for MOS8.0 from Fuel Plugin Catalog.
- To copy the plugin on the already-installed Fuel Master node, use the scp command as follows:
# scp mellanox-plugin-3.2-3.2.1-1.noarch.rpm root@<Fuel_Master_ip>:/tmp
- Install plugin from /tmp directory:
# cd /tmp
# fuel plugins --install mellanox-plugin-3.2-3.2.1-1.noarch.rpm
Note: The Mellanox plugin replaces the current bootstrap image; the original image is backed up in /opt/old_bootstrap_image/
- Verify that the plugin was successfully installed. It should be displayed when running the fuel plugins command.
[root@fuel ~]# fuel plugins
id | name | version | package_version
---|-----------------|---------|----------------
1 | mellanox-plugin | 3.2.1 | 3.0.0
- Create a new bootstrap image by running create_mellanox_bootstrap --link_type {eth,ib,current}. For an ETH setup run:
[root@fuel ~]# create_mellanox_bootstrap --link_type eth
Try to build image with data:
bootstrap:
certs: null
container: {format: tar.gz, meta_file: metadata.yaml}
. . .
Bootstrap image f790e9f8-5bc5-4e61-9935-0640f2eed949 has been activated.
- Reboot all nodes, even those already discovered.
You can do that manually or use the command below to reboot the already discovered nodes:
# reboot_bootstrap_nodes -a
Creating a new OpenStack Environment
Open the Fuel web UI in a browser (for example: http://10.7.208.54:8000) and log in using admin/admin as the username and password.
- Create a new environment in the Fuel dashboard. A configuration wizard will start.
- Configure the new environment wizard as follows:
- Name and Release
- Name: ETH VXLAN
- Release: Liberty on Ubuntu 14.04
- Compute
- QEMU-KVM
- Network
- Neutron with tunneling segmentation
- Storage Backend
- Block storage: CEPH
- Object storage: CEPH
- Image storage: CEPH
- Ephemeral storage: CEPH
- Additional Services
- None
- Finish
- Click the Create button.
- Click on the newly created environment and proceed with the environment configuration.
Configuring the OpenStack Environment
Settings Tab
- Enable the KVM hypervisor type.
KVM is required to enable the Mellanox OpenStack features.
Open the Settings tab, select the Compute section and then choose the KVM hypervisor type.
- Update the Storage settings.
Note: Best practice is to use 3 CEPH OSD servers with the replication ratio set to 3. In this example we use 2 servers just for a POC.
- Enable the desired Mellanox OpenStack features:
- Open the Other section.
- Select the relevant Mellanox plugin version if you have multiple versions installed.
Note: This is required to enable Mellanox OFED featuring VXLAN offload.
Nodes Tab
Servers Discovery by Fuel
This section assigns cloud roles to the servers. The servers must be discovered by Fuel first, so make sure they are configured for PXE boot over the Admin (PXE) network, then reboot them and wait for them to be discovered. Discovered nodes are counted in the top right corner of the Fuel dashboard.
Now you can add the UNALLOCATED NODES to the setup.
Add the Controller nodes first, then the Storage nodes, and finally the Compute nodes.
Add Controller Nodes
- Click Add Node.
- Identify the 3 controller nodes by the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. Assign these nodes the Controller role.
- Click the Apply Changes button.
Add Storage Nodes
Note: Best practice is to use 3 CEPH OSD servers with the replication ratio set to 3.
In this example we use 2 servers just for a POC.
- Click Add Node.
- Identify your storage nodes by the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. Assign these nodes the Storage - Ceph OSD role.
- Click the Apply Changes button.
Add Compute Nodes
- Click Add Node.
- Select all the nodes that are left and assign them the Compute role.
- Click Apply Changes.
Configure Interfaces
In this step, each network must be mapped to a physical interface for each node. You can choose and configure multiple nodes in parallel.
If there are hardware differences between the selected nodes (such as the number of network ports), bulk configuration is not allowed; in that case the Configure Interfaces button will show an error icon.
Select all Controller and Compute nodes (6 nodes in this example) in parallel. The 7th node (the Supermicro storage node) will be configured separately.
In this example, we set the Admin (PXE) network to the eno1 interface and the Public network to the eno2 interface.
The Storage, Private, and Management networks should run on the ConnectX-4 Lx adapter's 25GbE ports.
For redundancy, both ports of the ConnectX-4 Lx adapter should be joined in a bond interface using 802.3ad (LACP) mode.
This is an example:
Click Back To Node List and perform the network configuration for the Storage node in the same way.
Note: In some cases the speed of the Mellanox interface may be displayed incorrectly, for example as 34.5 Gb/s in the image above.
Configure disks
Note: There is no need to change the defaults for the Controller and Compute nodes unless changes are required. For the Storage node it is recommended to allocate only the high-performing RAID as Cinder storage; the small disk should be allocated to Base System.
Note: Best practice is to use 3 CEPH OSD servers with the replication ratio set to 3.
In this example we use 2 regular servers just for a POC. Please see the hardware requirements at the top of this document.
- Select the Storage node.
- Press the Disk Configuration button.
- Click on the sda disk bar, set Ceph allowed space to 0 MB and make Base System occupy the entire drive.
- Click Apply.
Networks Tab
As we are building a cloud with 2 racks, we need to add a new Node Network Group.
Let's name it Rack2. Also, to avoid confusion, let's rename the default Node Network Group to Rack1.
Section Node Network Group - Rack1
Public:
In our example, the nodes' Public IP range is 10.7.208.56-10.7.208.64 (7 IPs for physical servers, 1 reserved for HA and 1 reserved for the virtual router).
It is divided into 2 parts:
Rack1: 10.7.208.55-10.7.208.59
Rack2: 10.7.208.60-10.7.208.63
See the Networks Allocation section at the beginning of this document for details.
The rest of the Public IP range will be used for Floating IPs in the Neutron L3 section below.
In our example, the Public network does not use a VLAN. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.
Storage:
In this example, we select VLAN 3 for the Storage network and use the following settings:
IP range: 192.168.1.10 - 192.168.1.254
Gateway: 192.168.1.1
Management:
In this example, we select VLAN 2 for the Management network and use the following settings:
IP range: 192.168.0.10 - 192.168.0.254
Gateway: 192.168.0.1
Private:
In this example, we select VLAN 4 for the Private network and use the following settings:
IP range: 192.168.2.10 - 192.168.2.254
Gateway: 192.168.2.1
Section Node Network Group - Rack2
Admin (PXE):
CIDR: 10.30.0.0/24, use the whole CIDR
Note: Use the range relevant for your setup; it should be the range configured in the Fuel Master's firewall (see the Fuel routing section above).
Public:
Storage:
In this example, we select VLAN 3 for the Storage network and use the following settings:
IP range: 192.168.11.10 - 192.168.11.254
Gateway: 192.168.11.1
Management:
In this example, we select VLAN 2 for the Management network and use the following settings:
IP range: 192.168.10.10 - 192.168.10.254
Gateway: 192.168.10.1
Private:
In this example, we select VLAN 4 for the Private network and use the following settings:
IP range: 192.168.12.10 - 192.168.12.254
Gateway: 192.168.12.1
Section Settings
Neutron L2:
In this example, we leave Tunnel ID range untouched.
The base MAC is left untouched.
Neutron L3:
Floating Network Parameters: the floating IP range is a continuation of our Public IP range. In our deployment we use the 10.7.208.65 - 10.7.208.76 IP range.
Internal Network: Leave CIDR and Gateway with no changes.
Name servers: Leave DNS servers with no changes.
Other:
We assign a Public IP to all nodes; make sure Assign public network to all nodes is checked.
We use the Neutron L3 HA option.
For the deployment we also use the 8.8.8.8 and 8.8.4.4 DNS servers.
Save Configuration
Click Save Settings at the bottom of the page.
Verify Networks
The Verify Networks function is not available when the cloud spans several racks.
Deployment
Click the Deploy Changes button; you can follow the installation progress in the Nodes tab and in the logs.
The OS installation will start.
When the OS installation is finished, OpenStack is installed on the first controller.
OpenStack is then installed on the rest of the controllers, and afterwards on the Compute and Storage nodes.
Health Test
- Click on the Health Test tab.
- Check the Select All checkbox.
- Click Run Tests.
All tests should pass. Otherwise, check the log file for troubleshooting.
You can now safely use the cloud
Click the Dashboard tab and then the Horizon link to enter the OpenStack dashboard.
Prepare the Linux VM Image for CloudX
In order to have network and RoCE support in the VM, a recent version of MLNX_OFED should be installed in it.
MLNX_OFED can be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers.
(In the case of a CentOS/RHEL guest, you can use virt-manager to open the existing VM image and perform the MLNX_OFED installation, as sketched below.)
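A minimal sketch of the installation inside the guest (the archive name is a placeholder; use the MLNX_OFED package that matches your guest OS):
# tar -xzf MLNX_OFED_LINUX-<version>-<distro>-x86_64.tgz
# cd MLNX_OFED_LINUX-<version>-<distro>-x86_64
# ./mlnxofedinstall
# reboot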
Known Issues:
Issue # | Description | Workaround |
---|---|---|
1 | Verify Networks does not work when using more than 1 rack | N/A |