From Fedora Project Wiki

Virtualization

In this section, we cover the discussion of Fedora virtualization technologies taking place on the @et-mgmt-tools-list, @fedora-xen-list, @libvirt-list and @ovirt-devel-list.

Contributing Writer: Dale Bewley


Enterprise Management Tools List

This section contains the discussion happening on the et-mgmt-tools list.


Fedora Xen List

This section contains the discussion happening on the fedora-xen list.


Libvirt List

This section contains the discussion happening on the libvir-list.

Libvirt 0.4.6 Released

Daniel Veillard announced[1] the release of libvirt 0.4.6. "There is no major change in this release, just the bug fixes, a few improvements and some cleanup".

Improvements include:

  • add storage disk volume delete (Cole Robinson)
  • KVM dynamic max CPU detection (Guido Günther)
  • spec file improvement for minimal builds (Ben Guthro)
  • improved error message in XM configuration module (Richard Jones)
  • network config in OpenVZ support (Evgeniy Sokolov)
  • enable stopping a pool in logical storage backend and cleanup deletion of pool (Chris Lalancette)

[1] https://www.redhat.com/archives/libvir-list/2008-September/msg00380.html

RFC: Events API

David Lively began[1] a discussion on the implementation of events in libvirtd.

[1] https://www.redhat.com/archives/libvir-list/2008-September/msg00321.html
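The thread concerns how clients might register callbacks to be notified of domain lifecycle changes. As a rough illustration only, and not libvirt's actual API, a callback-registration pattern looks like this (all names here are hypothetical):

```python
# Hypothetical sketch of a callback-based events mechanism, in the spirit of
# the libvirtd events discussion. None of these names are real libvirt API.
from typing import Callable, Dict, List

# Event types a hypervisor connection might emit.
DOMAIN_STARTED = "started"
DOMAIN_STOPPED = "stopped"

class EventBroker:
    """Dispatches domain lifecycle events to registered callbacks."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[str], None]]] = {}

    def register(self, event: str, callback: Callable[[str], None]) -> None:
        # Multiple clients may listen for the same event type.
        self._handlers.setdefault(event, []).append(callback)

    def emit(self, event: str, domain: str) -> None:
        # Deliver the event to every handler registered for this type.
        for cb in self._handlers.get(event, []):
            cb(domain)

broker = EventBroker()
seen = []
broker.register(DOMAIN_STARTED, lambda dom: seen.append(("started", dom)))
broker.emit(DOMAIN_STARTED, "fedora9-guest")
print(seen)  # [('started', 'fedora9-guest')]
```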

Windows Binaries

Richard W.M. Jones pointed out[1] that, while not an official distribution, binaries for libvirt-0.dll and virsh.exe are available[2] in the mingw32-libvirt package.

[1] https://www.redhat.com/archives/libvir-list/2008-September/msg00393.html

[2] http://www.annexia.org/tmp/mingw/fedora-9/

oVirt Devel List

This section contains the discussion happening on the ovirt-devel list.

oVirt 0.93-1 Released

Perry N. Myers announced[1] the release of oVirt 0.93-1, covering both the oVirt Node and the oVirt Server Suite.

New features in this release include:

  • Addition of 'Smart Pools' in the Web user interface for organizing pools on a per-user basis.
  • Additions to the Edit VM screen to allow re-provisioning of a guest as well as editing other guest settings.
  • oVirt Appliance manages VMs directly on the host it is running on. This eliminates the 'fake nodes' used in previous versions.
  • oVirt API (Ruby Bindings)
  • Support for configuring more than one NIC per Node. UI support for this will be integrated shortly.
  • Support for bonding/failover of NICs. UI support for this will be integrated shortly.
  • SELinux support on oVirt Node
  • Rewrite of performance graphing visualization

Instructions for configuring yum to point to the ovirt.org repository: http://www.ovirt.org/download.html

Instructions for using the Appliance and Nodes: http://www.ovirt.org/install-instructions.html

[1] https://www.redhat.com/archives/ovirt-devel/2008-September/msg00491.html

Modeling LVM Storage

Chris Lalancette described[1] the outcome of an IRC chat about carving up storage with LVM.

The existing StoragePool in the current model contains zero or more StorageVolumes. Chris described adding an LVM StoragePool built from one or more iSCSI StorageVolumes, with Fibre Channel presumably supported in the future.

After the model is modified and the backend "taskomatic" code is in place, a user provisioning a guest VM will either choose an entire LUN for the guest, choose an existing logical volume, or create a new logical volume.
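In libvirt terms, an LVM pool carved from a shared device might be defined with XML along these lines (a sketch of libvirt's logical pool format; the volume group name and device path are placeholders):

```xml
<pool type='logical'>
  <name>guests</name>
  <source>
    <!-- Source device: e.g. an iSCSI LUN exposed to the host (placeholder path) -->
    <device path='/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2008-09.org.example:storage-lun-0'/>
    <name>guests</name>
    <format type='lvm2'/>
  </source>
  <target>
    <!-- Volume group device directory on the host -->
    <path>/dev/guests</path>
  </target>
</pool>
```

Logical volumes allocated from such a pool could then be handed to guests as their disks.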

[1] https://www.redhat.com/archives/ovirt-devel/2008-September/msg00313.html

Scott Seago clarified[2] that "volumes must be of the same 'type' as the pool": an IscsiStoragePool contains IscsiStorageVolumes and an LvmStoragePool contains LvmStorageVolumes. "In addition, for LvmStoragePools, we have a new association defined between it and StorageVolumes. An LvmStoragePool has 1 or more 'source storage volumes'"... "which for the moment must be IscsiStorageVolumes."

"When determining which storage volumes are available for guests, we'll have to filter out storage volumes which are connected to LvmStoragePools."

[2] https://www.redhat.com/archives/ovirt-devel/2008-September/msg00315.html
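Scott's constraints can be sketched as a small data model. The class names follow the thread, but the code itself is only an illustration, not oVirt's implementation:

```python
# Illustrative data model for the pool/volume typing rules described in the
# thread. Only the class names come from the discussion; the rest is a sketch.

class StorageVolume:
    def __init__(self, name):
        self.name = name
        self.lvm_pool = None  # set when used as a source volume of an LVM pool

class IscsiStorageVolume(StorageVolume):
    pass

class LvmStorageVolume(StorageVolume):
    pass

class StoragePool:
    volume_type = StorageVolume  # volumes must match the pool's type

    def __init__(self):
        self.volumes = []

    def add_volume(self, vol):
        if not isinstance(vol, self.volume_type):
            raise TypeError("volumes must be of the same 'type' as the pool")
        self.volumes.append(vol)

class IscsiStoragePool(StoragePool):
    volume_type = IscsiStorageVolume

class LvmStoragePool(StoragePool):
    volume_type = LvmStorageVolume

    def __init__(self, source_volumes):
        super().__init__()
        # An LvmStoragePool is carved out of one or more source volumes,
        # which for the moment must be iSCSI volumes.
        assert all(isinstance(v, IscsiStorageVolume) for v in source_volumes)
        self.source_volumes = source_volumes
        for v in source_volumes:
            v.lvm_pool = self

def volumes_available_for_guests(volumes):
    # Filter out volumes already consumed as LVM pool sources.
    return [v for v in volumes if v.lvm_pool is None]

lun0 = IscsiStorageVolume("lun0")
lun1 = IscsiStorageVolume("lun1")
vg = LvmStoragePool(source_volumes=[lun0])
print([v.name for v in volumes_available_for_guests([lun0, lun1])])  # ['lun1']
```

The filtering at the end mirrors the rule that source volumes connected to an LvmStoragePool are no longer directly assignable to guests.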

Steve Ofsthun asked[3] how oVirt will distinguish between logical volumes created on a whole disk assigned to a guest and volumes used by the host. Daniel P. Berrange suggested[4] this could be accomplished by creating a partition on the disk and assigning the partition to the guest, keeping the guest's LVM one step removed from the host's.

[3] https://www.redhat.com/archives/ovirt-devel/2008-September/msg00317.html

[4] https://www.redhat.com/archives/ovirt-devel/2008-September/msg00322.html