
Deploying a Single Front-end & Local Storage

In this architecture, a single Front-end runs all of the OpenNebula services, and the Virtual Machines (VMs) run on Hypervisor hosts. VM images are hosted on the Front-end and transferred to the Hypervisor nodes as needed.

The Front-end and hypervisors are in the same flat (bridged) network.

This page briefly describes each component of the architecture and lists the corresponding configuration for automatic deployment.

For a step-by-step tutorial on deploying this architecture, please see the OpenNebula documentation.

Storage

The Front-end hosts the image repository that contains the virtual disk images. When a Virtual Machine is instantiated, the Front-end transfers a copy of the virtual disk image to the Hypervisor node that will run the VM. By default, both the Front-end and Hypervisor nodes store these images in the directory /var/lib/one/datastores. This can be an actual directory on the root filesystem, or a symlink to any other location.
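For example, to keep the default path while placing the images on a dedicated disk, the directory can be replaced by a symlink (a minimal sketch; the mount point /mnt/one_datastores is an assumption):

# Assumption: a dedicated volume is already mounted at /mnt/one_datastores,
# and /var/lib/one/datastores does not exist yet
$ ln -s /mnt/one_datastores /var/lib/one/datastores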

Configuring the Automatic Deployment

Datastore Mode

To use the default location (/var/lib/one/datastores as a real directory on the root filesystem), the inventory file for the deployment should contain the following snippet:

ds:
  mode: ssh

Storage Volumes

If you wish to use a dedicated volume for the datastores, you can mount it at /var/lib/one/datastores prior to the deployment.

To use a dedicated volume mounted at a custom location (e.g. /mnt/one_datastores), you need to pre-create directories for each datastore and assign the oneadmin system user as their owner (see the shell sketch after the snippet below). In the inventory file, use the following snippet, which will automatically create the symlinks during the deployment:

ds:
  mode: ssh
  mounts:
  - type: system
    path: /mnt/one_datastores/system/
  - type: image
    path: /mnt/one_datastores/default/
  - type: files
    path: /mnt/one_datastores/files/
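For example, the datastore directories could be pre-created as follows (a minimal sketch; it assumes the volume is already mounted at /mnt/one_datastores and that the oneadmin user and group already exist on the host):

# Create one directory per datastore on the mounted volume
$ mkdir -p /mnt/one_datastores/{system,default,files}
# Make the oneadmin system user the owner
$ chown -R oneadmin:oneadmin /mnt/one_datastores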

The resulting directory structure on each host is shown below.

As seen from the default location, each numbered datastore directory is a symlink to the corresponding custom directory:

$ tree /var/lib/one/datastores/
/var/lib/one/datastores/
├── 0 -> /mnt/one_datastores/system/
├── 1 -> /mnt/one_datastores/default/
└── 2 -> /mnt/one_datastores/files/

And as seen from the custom location, in this case /mnt/one_datastores/:

$ tree /mnt/one_datastores/
/mnt/one_datastores/
├── system
├── default
└── files

Networking

The most basic network configuration is a flat (bridged) network, where the main interface on each Host is used to connect the VMs to the network.

Note

The playbook requires either Netplan or NetworkManager to be present on the Hosts in order to perform the initial configuration.

To create the virtual network for the VMs, you need to pick a range of IP addresses. These addresses must be reachable through the network used by the main interface of the Host, as the VM traffic will be forwarded through it.

The following snippet shows how to define the admin_net virtual network using a range of IPs from the network used by the hosts:

vn:
  admin_net:
    managed: true
    template:
      VN_MAD: bridge
      PHYDEV: eth0
      BRIDGE: br0
      AR:
        TYPE: IP4
        IP: 10.0.0.50
        SIZE: 48
      NETWORK_ADDRESS: 10.0.0.0
      NETWORK_MASK: 255.255.255.0
      GATEWAY: 10.0.0.1
      DNS: 1.1.1.1
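This address range (AR) reserves 48 consecutive leases starting at 10.0.0.50, i.e. 10.0.0.50 through 10.0.0.97. After the deployment, the network can be inspected from the Front-end with the standard CLI, for example:

$ onevnet show admin_net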

If the hosts have any other interfaces, you can use them as well. For example, to define a dedicated VM network using bond0 and VXLAN networking:

vn:
  vms_net:
    managed: true
    template:
      VN_MAD: vxlan
      PHYDEV: bond0
      BRIDGE: br1
      VLAN_ID: 123
      FILTER_IP_SPOOFING: 'NO'
      FILTER_MAC_SPOOFING: 'YES'
      GUEST_MTU: 1450
      AR:
        TYPE: IP4
        IP: 192.168.0.10
        SIZE: 100
      NETWORK_ADDRESS: 192.168.0.0
      NETWORK_MASK: 255.255.255.0
      GATEWAY: 192.168.0.1
      DNS: 192.168.0.1
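VMs can then be attached to this network through the NIC attribute of their VM template; a minimal, hypothetical template fragment:

# Attach a NIC to the dedicated VM network defined above
NIC = [ NETWORK = "vms_net" ]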

OpenNebula Front-end & Services

The Ansible playbook installs a complete suite of OpenNebula services, including the base daemons (oned and the scheduler), the OpenNebula Flow and Gate services, and the Sunstone web UI. You only need to select the OpenNebula version to install and pick a password for the oneadmin user.

all:
  vars:
    one_pass: opennebula
    one_version: '6.6'

The Sunstone server can be deployed as a systemd service (opennebula-sunstone) or on top of the Phusion Passenger Apache2 module (for improved performance); you can find more info about this integration here. By default, Apache2 is not configured; you can enable it by defining a few inventory vars:

all:
  vars:
    features:
      # Enable Passenger/Apache2 integration.
      passenger: true
    apache2_http:
      # Do NOT manage (or deploy) plain HTTP Apache2 VHOST.
      managed: false
    apache2_https:
      # Do manage and deploy HTTPS Apache2 VHOST.
      managed: true
      # NOTE: The key and certchain vars should point to existing and valid PEM files.
      key: /etc/ssl/private/opennebula-key.pem
      certchain: /etc/ssl/certs/opennebula-certchain.pem
    # Access your instance at https://myone.example.org.
    one_fqdn: myone.example.org
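The key and certificate chain must already exist on the Front-end at the configured paths. For testing purposes only, a self-signed pair could be generated with something like the following (a sketch; paths and FQDN match the example above):

$ openssl req -x509 -newkey rsa:4096 -days 365 -nodes \
    -subj "/CN=myone.example.org" \
    -keyout /etc/ssl/private/opennebula-key.pem \
    -out /etc/ssl/certs/opennebula-certchain.pem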

Note

When the Passenger integration is enabled, the opennebula-sunstone systemd service is automatically stopped and disabled.

Enterprise Edition

You can use the Enterprise Edition distribution with the Ansible playbooks; simply add your token to the vars. You can also enable the Prometheus and Grafana integration, which is part of the Enterprise Edition:

all:
  vars:
    one_token: example:example
    features:
      prometheus: true

The complete inventory file

The following file shows the complete settings needed to install a single Front-end with two hosts using local storage:

---
all:
  vars:
    ansible_user: root
    one_version: '6.6'
    one_pass: opennebulapass
    ds:
      mode: ssh
    vn:
      admin_net:
        managed: true
        template:
          VN_MAD: bridge
          PHYDEV: eth0
          BRIDGE: br0
          AR:
            TYPE: IP4
            IP: 172.20.0.100
            SIZE: 48
          NETWORK_ADDRESS: 172.20.0.0
          NETWORK_MASK: 255.255.255.0
          GATEWAY: 172.20.0.1
          DNS: 1.1.1.1

frontend:
  hosts:
    fe1: { ansible_host: 172.20.0.7 }

node:
  hosts:
    node1: { ansible_host: 172.20.0.8 }
    node2: { ansible_host: 172.20.0.9 }

Running the Ansible Playbook

1. Prepare the inventory file: update the inventory/local.yml file to match your infrastructure settings. Be sure to review or update the following variables:

   • ansible_user: update it if different from root
   • one_pass: change it to the password for the oneadmin account
   • one_version: be sure to use the latest stable version here

2. Check the connection: to verify the network connection, SSH, and sudo configuration, run the following command:

   ansible -i inventory/local.yml all -m ping -b

3. Site installation: now run the site playbook, which installs and configures the OpenNebula services:

   ansible-playbook -i inventory/local.yml opennebula.deploy.main

After the playbook run finishes, your new OpenNebula cloud is ready. You can now head to the verification guide.
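As a quick sanity check (assuming the example inventory above, with root SSH access to the Front-end), you could list the registered Hypervisor hosts:

# node1 and node2 should appear in the list once monitoring kicks in
$ ssh root@172.20.0.7 "su - oneadmin -c 'onehost list'"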