Virtualization

Virtualization in Fedora 10 includes major changes and new features that continue to support KVM, Xen, and many other virtual machine platforms.

Unified Kernel Image

The kernel-xen package has been obsoleted by the integration of paravirtualization operations in the upstream kernel. The kernel package in Fedora 10 supports booting as a guest domU, but will not function as a dom0 until such support is provided upstream. The most recent Fedora release with dom0 support is Fedora 8.

Booting a Xen domU guest within a Fedora 10 host requires the KVM-based xenner. Xenner runs the guest kernel and a small Xen emulator together as a KVM guest.

Important: KVM requires hardware virtualization features in the host system. Systems lacking hardware virtualization do not support Xen guests at this time.

For more information refer to:

Virtualization Storage Management

Advances in libvirt now provide the ability to list, create, and delete storage volumes on remote hosts. This includes the ability to create raw sparse and non-sparse files in a directory, allocate LVM logical volumes, partition physical disks, and attach to iSCSI targets.

This enables the virt-manager tool to remotely provision new guest domains, and manage the storage associated with them. It provides improved SELinux integration, since the APIs ensure that all storage volumes have the correct SELinux security context when being assigned to a guest.

Features

  • List storage volumes in a directory, and allocate new volumes as raw files (both sparse and non-sparse) or in any format supported by qemu-img (cow, qcow, qcow2, vmdk, etc.)
  • List partitions in a disk, and allocate new partitions from free space
  • Connect to an iSCSI server and list volumes associated with an exported target
  • List logical volumes in an LVM volume group, and allocate new LVM logical volumes
  • Automatically assign the correct SELinux security context label (virt_image_t) to all volumes when associating them with a guest.

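The same storage operations are exposed through the libvirt Python bindings. The sketch below is a minimal illustration, not taken from the release notes: it assumes a storage pool named "default" already exists on the host, and the volume name and size are placeholders.

    import libvirt

    # Connect to the local QEMU/KVM driver; a remote URI such as
    # "qemu+ssh://host.example.com/system" works the same way.
    conn = libvirt.open("qemu:///system")

    # List the active storage pools and the volumes each one contains.
    for name in conn.listStoragePools():
        pool = conn.storagePoolLookupByName(name)
        print(name, pool.listVolumes())

    # Allocate a new sparse raw volume in the (assumed) "default" pool.
    pool = conn.storagePoolLookupByName("default")
    vol_xml = """
    <volume>
      <name>guest1.img</name>
      <capacity>8589934592</capacity>   <!-- 8 GiB, in bytes -->
      <allocation>0</allocation>        <!-- sparse: space is allocated on demand -->
      <target><format type='raw'/></target>
    </volume>
    """
    vol = pool.createXML(vol_xml, 0)
    print(vol.path())
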
For further details refer to:

Remote Installation of Virtual Machines

Improvements in Virtualization Storage Management have enabled the creation of guests on remote host systems. By leveraging Avahi, systems supporting libvirt can be automatically detected by virt-manager. Upon detection, guests can be provisioned on the remote system.
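
As a rough sketch of what remote provisioning through libvirt looks like programmatically, the example below connects to a remote host over SSH and defines and starts a guest there. The host name, guest name, and domain XML are illustrative placeholders; a real installation would normally be driven through virt-manager or virt-install.

    import libvirt

    # Connect to the libvirt daemon on a remote host over SSH.
    # virt-manager discovers such hosts automatically via Avahi.
    conn = libvirt.open("qemu+ssh://virthost.example.com/system")

    # A deliberately minimal domain description; a real guest would also
    # reference installation media and storage provisioned as shown above.
    domain_xml = """
    <domain type='kvm'>
      <name>f10-guest</name>
      <memory>524288</memory>   <!-- 512 MB, in KiB -->
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/f10-guest.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    dom = conn.defineXML(domain_xml)   # make the guest persistent on the remote host
    dom.create()                       # and boot it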

Installations can be automated with the help of cobbler and koan. Cobbler is a Linux installation server that allows for rapid setup of network installation environments. Network installs can be configured for PXE boot, reinstallations, media-based net-installs, and virtualized guest installs. Cobbler uses a helper program, koan, for reinstallation and virtualization support.

For further details refer to:

Other Improvements

Fedora also includes the following virtualization improvements:

  • Utilities in the new virt-mem package provide access to process tables, interface information, dmesg, and uname of QEMU and KVM guests from the host system. http://et.redhat.com/~rjones/virt-mem/
    Note: virt-mem is experimental. Only 32-bit guests are supported at this time.

libvirt Updated to 0.4.6

The libvirt package provides an API and tools to interact with the virtualization capabilities of recent versions of Linux (and other OSes). The libvirt software is designed to be a common denominator among all virtualization technologies with support for the following:

  • The Xen hypervisor on Linux and Solaris hosts.
  • The QEMU emulator
  • The KVM Linux hypervisor
  • The LXC Linux container system
  • The OpenVZ Linux container system
  • Storage on IDE/SCSI/USB disks, FibreChannel, LVM, iSCSI, and NFS

New features and improvements since 0.4.2:

  • Enhanced OpenVZ support
  • Enhanced Linux containers (LXC) support
  • Storage pools API
  • Improved iSCSI support
  • USB device passthrough for QEMU and KVM
  • Sound, serial, and parallel device support for QEMU and Xen
  • Support for NUMA and vCPU pinning in QEMU (see the sketch after this list)
  • Unified XML domain and network parsing for all virtualization drivers

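As an illustration of the new vCPU pinning support, the sketch below pins one virtual CPU of a running guest using the libvirt Python bindings. The guest name is a placeholder, and the host is assumed to have at least four physical CPUs.

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("f10-guest")   # illustrative guest name

    # Pin virtual CPU 0 of the guest to physical CPU 1. The cpumap is a
    # tuple of booleans with one entry per physical CPU on the host
    # (four CPUs are assumed here).
    dom.pinVcpu(0, (False, True, False, False))

    # vcpus() returns per-vCPU state together with the current pinning maps.
    print(dom.vcpus())
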
For further details refer to:

http://www.libvirt.org/news.html

virt-manager Updated to 0.6.0

The virt-manager package provides a GUI implementation of virtinst and libvirt functionality.

New features and improvements since 0.5.4:

  • Remote storage management and provisioning: view, add, remove, and provision libvirt managed storage. Attach managed storage to a remote VM.
  • Remote VM installation support: Install from managed media (CDROM) or PXE. Simple install time storage provisioning.
  • VM details and console windows merged: each VM is now represented by a single tabbed window.
  • Use Avahi to list libvirtd instances on the network.
  • Hypervisor autoconnect: option to connect to the hypervisor at virt-manager startup.
  • Option to add sound device emulation when creating new guests.
  • Virtio and USB options when adding a disk device.
  • Allow viewing and removing VM sound, serial, parallel, and console devices.
  • Allow specifying a keymap when adding a display device.
  • Keep app running if manager window is closed but VM window is still open.
  • Allow limiting the amount of stored stats history.

For further details refer to:

http://virt-manager.et.redhat.com/

virtinst Updated to 0.400.0

The python-virtinst package contains tools for installing and manipulating multiple VM guest image formats.

New features and improvements since 0.300.3:

  • New tool virt-convert: Allows converting between different types of virt configuration files. Currently only supports vmx to virt-image.
  • New tool virt-pack: Converts virt-image xml format to vmx and packs in a tar.gz. (Note this will likely be merged with virt-convert in the future).
  • virt-install improvements:
    • Support for remote VM installation. Can use install media and disk images on remote host if shared via libvirt. Allows provisioning storage on remote pools.
    • Support setting CPU pinning information for QEMU/KVM VMs
    • NUMA support via --cpuset=auto option
    • New options (see the sketch after this list):
      • --wait allows putting a hard time limit on installs
      • --sound creates the VM with sound card emulation
      • --disk allows specifying media as a path, storage volume, or a pool to provision storage on, as well as the device type and several other options. Deprecates --file, --size, --nonsparse.
      • --prompt turns input prompting back on; it is no longer the default.
  • virt-image improvements:
    • --replace option to overwrite existing VM image file
    • Support multiple network interfaces in virt-image format
  • Use virtio disk/net drivers if the chosen guest OS entry supports them (Fedora 9 and 10)

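The new options can be combined on a single virt-install command line. The sketch below drives virt-install from Python; the guest name, disk path and size, install tree URL, and time limit are illustrative placeholders rather than values from the release notes.

    import subprocess

    # Provision a new guest using the options introduced in 0.400.0.
    subprocess.check_call([
        "virt-install",
        "--connect", "qemu:///system",
        "--name", "f10-guest",
        "--ram", "512",
        # --disk replaces --file/--size/--nonsparse: path and size (in GB) in one option
        "--disk", "path=/var/lib/libvirt/images/f10-guest.img,size=8",
        "--location", "http://mirror.example.com/fedora/10/x86_64/os/",
        "--sound",          # emulate a sound card in the new guest
        "--wait", "60",     # give up if the install has not finished within 60 minutes
    ])
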
For further details refer to:

Xen Updated to 3.3.0

Fedora 10 supports booting as a guest domU, but will not function as a dom0 until such support is provided in the upstream kernel. Support for a pv_ops dom0 is targeted for Xen 3.4.

Changes since 3.2.0:

  • Power management (P & C states) in the hypervisor
  • HVM emulation domains (qemu-on-minios) for better scalability, performance, and security
  • PVGrub: boot PV kernels using real GRUB inside the PV domain
  • Better PV performance: domain lock removed from pagetable-update paths
  • Shadow3: optimisations to make this the best shadow pagetable algorithm yet, making HVM performance better than ever
  • Hardware Assisted Paging enhancements: 2MB page support for better TLB locality
  • CPUID feature levelling: allows safe domain migration across systems with different CPU models
  • PVSCSI drivers for SCSI access direct into PV guests
  • HVM framebuffer optimisations: scan for framebuffer updates more efficiently
  • Device passthrough enhancements
  • Full x86 real-mode emulation for HVM guests on Intel VT: supports a much wider range of legacy guest OSes
  • New qemu merge with upstream development
  • Many other changes in both x86 and IA64 ports

For further details refer to: