OpenStack – Node Types and Their Features
OpenStack can be deployed in a single-node or multi-node configuration. For the purpose of this post I am going to assume you understand OpenStack basics and have at least done a basic single-node installation using RDO or another installer. If not, please refer to this post, which covers the basics. OpenStack is, of course, a collection of loosely coupled projects that define services. A node is nothing more than a grouping of OpenStack
services that run on bare metal, in a container or in a virtual machine. The purpose of a node is to provide horizontal scaling and high availability (HA).
There are four possible node types in OpenStack:
Controller, Compute, Network and Storage.
The controller node is the heart of OpenStack. It acts as the control plane for the OpenStack environment. The control plane runs identity (Keystone), the dashboard (Horizon), telemetry (Ceilometer), orchestration (Heat) and the Neutron network server service.
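As a sketch of how this placement looks in practice, an RDO Packstack answer file can pin these control-plane services to a dedicated controller host. The IP address below is an example only:

```ini
# Hypothetical Packstack answer-file fragment for a multi-node deployment.
# The controller host runs the control-plane services listed in the text.
CONFIG_CONTROLLER_HOST=192.168.1.10   # example address of the controller
CONFIG_KEYSTONE_INSTALL=y             # identity
CONFIG_HORIZON_INSTALL=y              # dashboard
CONFIG_CEILOMETER_INSTALL=y           # telemetry
CONFIG_HEAT_INSTALL=y                 # orchestration
CONFIG_NEUTRON_INSTALL=y              # neutron server runs on the controller
```

Packstack then installs only those services on the controller, leaving compute and network services for the other hosts in the answer file.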
The compute node runs a hypervisor (KVM, ESXi, Hyper-V or XenServer). It runs the compute service (Nova), the telemetry agent (Ceilometer) and the Neutron Open vSwitch agent service.
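For a KVM-based compute node, this maps to a small piece of Nova configuration. A minimal sketch of the relevant nova.conf options:

```ini
# Hypothetical nova.conf fragment for a KVM compute node.
[DEFAULT]
compute_driver = libvirt.LibvirtDriver   # Nova talks to the hypervisor via libvirt
[libvirt]
virt_type = kvm                          # use KVM rather than plain QEMU emulation
```

The Neutron Open vSwitch agent runs alongside nova-compute on the same host so that instances can be plugged into the br-int bridge as they boot.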
The network node runs networking services (Neutron). It runs the Neutron agents for L3, metadata, DHCP and Open vSwitch. The network node handles all networking between other nodes as well as tenant networking and routing. It provides services such as DHCP and floating IPs that allow instances to connect to public networks. Neutron sits on top of Open vSwitch using the ML2 plugin (or the legacy openvswitch plugin). Using Open vSwitch, Neutron builds three network bridges: br-int, br-tun and br-ex. The br-int bridge connects all instances. The br-tun bridge carries tunneled tenant traffic between nodes over the physical NIC of the hypervisor. The br-ex bridge connects instances to external (public) networks using floating IPs. Both the br-tun and br-int bridges are visible on compute and network nodes. The br-ex bridge is only visible on network nodes.
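The bridge layout above is driven by the ML2 and Open vSwitch agent configuration. A minimal sketch, assuming VXLAN tenant networks and an example local IP:

```ini
# Hypothetical ML2 / Open vSwitch agent fragment for a network node.
[ml2]
mechanism_drivers = openvswitch       # Neutron sits on top of Open vSwitch
tenant_network_types = vxlan          # tenant traffic is tunneled via br-tun

[ovs]
bridge_mappings = external:br-ex      # br-ex provides floating-IP access to public networks
local_ip = 10.0.0.20                  # example tunnel endpoint carried over the physical NIC

[agent]
tunnel_types = vxlan                  # tunnel traffic between nodes crosses br-tun
```

Compute nodes carry the same [ovs] and [agent] sections minus the bridge_mappings entry, which is why br-ex only appears on network nodes.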
The storage node runs storage services. It handles the image service (Glance), block storage (Cinder), object storage (Swift) and, in the future, shared file storage (Manila). Typically a storage node runs one type of storage service: object, block or file. Glance
should run on nodes providing storage services for images (Cinder or Swift), as it typically benefits from running on the same node as its backing storage service. NetApp, for example, provides a storage backend that allows images to be cloned on the array instead of copied over the network.
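To make the block-storage case concrete, this is roughly what enabling a NetApp backend looks like in cinder.conf. The backend name, hostname and credentials below are examples only:

```ini
# Hypothetical cinder.conf fragment enabling a NetApp block-storage backend.
[DEFAULT]
enabled_backends = netapp1

[netapp1]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster      # clustered Data ONTAP
netapp_storage_protocol = nfs              # NFS-backed volumes
netapp_server_hostname = netapp.example.com
netapp_login = admin                       # example credentials
netapp_password = secret
volume_backend_name = netapp1
```

With a backend like this, creating a volume from a Glance image can be offloaded to a clone operation on the array rather than a copy over the network.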