# OpenStack Deployment - 11. Nova Compute Node Deployment

Author: sysit · Published 2018-12-05

## 11.1 Install nova-compute

```
# Install the nova-compute service on all compute nodes; compute1 is used as the example
yum install openstack-nova-compute python-openstackclient openstack-utils openstack-selinux -y
# On CentOS 8
yum install openstack-nova-compute python3-openstackclient openstack-utils openstack-selinux -y
```

## 11.2 Configure nova.conf

```
# Run on all compute nodes; compute1 is used as the example.
# Note: adjust the "my_ip" parameter for each node.
# Note: nova.conf must be owned by root:nova.
[root@compute1 ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute1 ~]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
debug = False
log_dir = /var/log/nova
state_path = /var/lib/nova
allow_resize_to_same_host = true
compute_driver = libvirt.LibvirtDriver
my_ip = 10.29.32.11
transport_url = rabbit://openstack:password@10.29.32.7:5672,openstack:password@10.29.32.8:5672//
vcpu_pin_set = 4-31
reserved_host_memory_mb=16384
resume_guests_state_on_host_boot=true
[conductor]
workers = 5
[vnc]
novncproxy_host = 10.29.32.11
novncproxy_port = 6080
server_listen = 10.29.32.11
server_proxyclient_address = 10.29.32.11
novncproxy_base_url = http://console.sysit.cn:6080/vnc_auto.html
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[glance]
api_servers = http://10.29.32.10:9292
num_retries = 3
debug = False
[cinder]
catalog_info = volumev3:cinderv3:internalURL
os_region_name = RegionOne
[neutron]
metadata_proxy_shared_secret = METADATA_SECRET
service_metadata_proxy = true
auth_url = http://10.29.32.10:35357
auth_type = password
project_domain_name = Default
user_domain_id = default
project_name = service
username = neutron
password = neutronpassword
region_name = RegionOne
valid_interfaces = internal
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 10744136-583f-4a9c-ae30-9bfb3515526b
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap
virt_type=kvm
snapshot_image_format=raw
[upgrade_levels]
compute = auto
[privsep_entrypoint]
helper_command = sudo nova-rootwrap /etc/nova/rootwrap.conf privsep-helper --config-file /etc/nova/nova.conf
[guestfs]
debug = False
[placement]
auth_type = password
auth_url = http://10.29.32.10:35357
username = placement
password = placementpassword
user_domain_name = Default
project_name = service
project_domain_name = Default
region_name = RegionOne
valid_interfaces = internal
[notifications]
notification_format = unversioned
```

## 11.3 Integrate Ceph

* Install the Ceph client

```
# Skip this step on nodes where Ceph is co-located (converged) with nova-compute.
# Every node running nova-compute needs ceph-common installed; compute1 is used as the example
[root@compute1 ~]# yum install ceph-common -y
```

* Push ceph.conf

```
# Skip this step on nodes where Ceph is co-located with nova-compute.
# Push ceph.conf to every node running the nova-compute service.
# Note: the nova user needs permission to read ceph.conf.
[root@storage1 ~]# scp /etc/ceph/ceph.conf root@compute1:/etc/ceph/ceph.conf
```

* Push the Ceph key

```
# Nodes running nova-compute must store the client.cinder user's key in libvirt; when a
# Ceph-backed Cinder volume is attached to an instance, libvirt uses this key to access
# the Ceph cluster.
# Push the client.cinder key to every node running the nova-compute service
[root@storage1 ~]# ceph auth get-key client.cinder | ssh root@compute1 tee /etc/ceph/client.cinder.key
```

* Inject the key into libvirt

```
# On each node running nova-compute, add the key to libvirt; compute1 is used as the example.
# Skip this step if cephx authentication is disabled on the Ceph cluster.
# The UUID here must match the one used in the cinder-volume deployment earlier.
# Add the key
[root@compute1 ~]# cd /etc/ceph
[root@compute1 ceph]# cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>10744136-583f-4a9c-ae30-9bfb3515526b</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
[root@compute1 ceph]# virsh secret-define --file secret.xml
[root@compute1 ceph]# virsh secret-set-value --secret 10744136-583f-4a9c-ae30-9bfb3515526b --base64 $(cat /etc/ceph/client.cinder.key)
# The files can be deleted once the secret is registered, but keeping them is recommended
#[root@compute1 ceph]# rm client.cinder.key secret.xml
```
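With several compute nodes, repeating the key push and secret registration by hand is error-prone. The loop below is a minimal sketch run from storage1; the node list (compute1 compute2 compute3), passwordless root SSH from storage1, and a copy of the secret.xml above at /etc/ceph/secret.xml on storage1 are all assumptions to adapt to your environment.

```
# Hypothetical helper, run on storage1: push the client.cinder key to each
# compute node and register it as a libvirt secret there.
UUID=10744136-583f-4a9c-ae30-9bfb3515526b
for node in compute1 compute2 compute3; do
    # Push the client.cinder key
    ceph auth get-key client.cinder | ssh root@$node tee /etc/ceph/client.cinder.key
    # Push the secret definition, then define the secret and set its value
    scp /etc/ceph/secret.xml root@$node:/etc/ceph/secret.xml
    ssh root@$node "virsh secret-define --file /etc/ceph/secret.xml && \
        virsh secret-set-value --secret $UUID --base64 \$(cat /etc/ceph/client.cinder.key)"
done
```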
* Modify ceph.conf

```
# To boot instances from Ceph RBD, Ceph must be configured as Nova's ephemeral backend.
# Enabling the rbd cache in the compute nodes' configuration is recommended.
# To ease troubleshooting, configure the admin socket parameter so that every VM using
# Ceph RBD gets its own socket, which helps with performance analysis and fault diagnosis.
# Only the [client] and [client.cinder] sections of ceph.conf on the compute nodes are involved.
# Run on all nodes running nova-compute; compute1 is used as the example
[root@compute1 ~]# vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

# Create the socket and log directories referenced in ceph.conf, and set their ownership
[root@compute1 ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@compute1 ~]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
```

* Configure nova.conf

```
# On all compute nodes, configure Nova to use the Ceph cluster's vms pool as its backend;
# compute1 is used as the example.
# If cephx authentication is not configured, remove the rbd_user and rbd_secret_uuid parameters
[root@compute1 ~]# vi /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# rbd_secret_uuid must match the UUID used in the cinder-volume deployment earlier
rbd_secret_uuid = 10744136-583f-4a9c-ae30-9bfb3515526b
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# Disable file injection
inject_password = false
inject_key = false
inject_partition = -2
# Discard support for the instance's ephemeral root disk; with "unmap", space on
# scsi-type disks is released as soon as it is freed
hw_disk_discard = unmap
virt_type=kvm
snapshot_image_format=raw
```

## 11.4 Start the services

```
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
```

## 11.5 Add compute nodes to the cell database

> Every newly added compute node requires the steps below; note that they are run on a controller node

```
# Run on any controller node
[root@controller1 ~]# . admin-openrc
# Confirm the database contains the host
[root@controller1 ~]# openstack compute service list --service nova-compute
+----+--------------+-------------------+------+---------+-------+----------------------------+
| ID | Binary       | Host              | Zone | Status  | State | Updated At                 |
+----+--------------+-------------------+------+---------+-------+----------------------------+
| 12 | nova-compute | compute1.sysit.cn | nova | enabled | up    | 2018-07-02T08:51:56.000000 |
+----+--------------+-------------------+------+---------+-------+----------------------------+
```

* Discover compute nodes manually

```
# Discover compute hosts manually, i.e. add them to the cell database
[root@controller1 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 69c78356-4dc8-4ace-a1c8-c8d216c5837f
Checking host mapping for compute host 'compute1.sysit.cn': f168388b-f344-4c28-a160-7801edf2c5e3
Creating host mapping for compute host 'compute1.sysit.cn': f168388b-f344-4c28-a160-7801edf2c5e3
Found 1 unmapped computes in cell: 69c78356-4dc8-4ace-a1c8-c8d216c5837f
```

* Discover compute nodes automatically

```
# Run on all controller nodes.
# To avoid running "nova-manage cell_v2 discover_hosts" by hand whenever a compute node is
# added, the controllers can be set to discover hosts periodically.
# This involves the [scheduler] section of nova.conf on the controller nodes.
# The interval below is 5 minutes; adjust it to suit your environment
[root@controller1 ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval=300

# Restart the nova service for the change to take effect
[root@controller1 ~]# systemctl restart openstack-nova-api.service
```

## 11.6 Verification

```
Log in to the dashboard: Admin --> Compute --> Hypervisors.
If registration succeeded, the compute nodes appear under the "Hypervisors" tab along with
each node's resources; if a node is unregistered or registration failed, no host is shown there.
```
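The same check can be made from the CLI. The sketch below uses the standard openstack client of this era; `openstack hypervisor stats show` availability depends on the client version, so treat it as an assumption if yours differs.

```
# On any controller node: each registered compute node should be listed, and the
# aggregate resource statistics should reflect it
[root@controller1 ~]# . admin-openrc
[root@controller1 ~]# openstack hypervisor list
[root@controller1 ~]# openstack hypervisor stats show
```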
## 11.7 Passwordless SSH between all compute nodes

SSH mutual trust is implemented by sharing a single key pair across all nodes.

* On all servers, change the nova user's login shell to /bin/bash

```
usermod -s /bin/bash nova
echo "novapassword" | passwd --stdin nova
su - nova
mkdir ~/.ssh && chmod 700 ~/.ssh
```

* Run ssh-keygen on any one server

```
su - nova
ssh-keygen
cat >~/.ssh/config<<'EOF'
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    port 22
EOF
```

* Generate the authorized_keys file on that server

```
ssh-copy-id nova@compute1
```

* Copy the files to the other servers

```
scp ~/.ssh/{authorized_keys,config,id_rsa} nova@compute2:~/.ssh/
```

## 11.8 Configure live migration

* Modify /etc/libvirt/libvirtd.conf

```
# Run on all compute nodes; compute1 is used as the example.
# Only the modified settings in libvirtd.conf are shown
[root@compute1 ~]# egrep -v "^$|^#" /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "10.29.32.11"
auth_tcp = "none"
```

* Modify /etc/sysconfig/libvirtd

```
# Run on all compute nodes; compute1 is used as the example.
# Only the modified line of the libvirtd file is shown
[root@compute1 ~]# vi /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
```

* Restart the services

```
# Both libvirtd and nova-compute must be restarted
[root@compute1 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
# Check that libvirtd is listening
[root@compute1 ~]# netstat -tunlp | grep 16509
```

* The LIBVIRTD_ARGS issue

When `libvirtd` is deployed on `CentOS8`, configuring `LIBVIRTD_ARGS="--listen"` and restarting `libvirtd` fails with:

```
libvirtd[11164]: --listen parameter not permitted with systemd activation sockets, see 'man libvirtd' for further guidance
```

Workaround:

```
systemctl mask libvirtd.socket libvirtd-ro.socket \
 libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
```
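To confirm that live migration works end to end, one might first check the plain-TCP libvirt connection between two compute nodes and then trigger a test migration. This is a hedged sketch: "compute2", "compute2.sysit.cn", and "demo-vm" are example values, and the `openstack server migrate --live <host>` syntax is that of the client generation current when this was written.

```
# From compute1: confirm the peer's libvirtd accepts unauthenticated TCP connections
[root@compute1 ~]# virsh -c qemu+tcp://compute2/system list --all

# From a controller node: live-migrate a test instance and watch its host change
[root@controller1 ~]# . admin-openrc
[root@controller1 ~]# openstack server migrate --live compute2.sysit.cn demo-vm
[root@controller1 ~]# openstack server show demo-vm -c status -c OS-EXT-SRV-ATTR:host
```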