Revision as of 20:10, 12 January 2010

Enable kernel acceleration for kvm networking

Summary

Enable kernel acceleration for kvm networking

Owner

Current status

  • Targeted release: Fedora 13
  • Last updated: 2009-07-22
  • Percentage of completion: 20%

Detailed Description

vhost net moves the task of converting virtio descriptors to skbs and back from qemu userspace to the kernel driver.

Benefit to Fedora

Using a kernel module reduces latency and improves packets per second for small packets.
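The latency and packets-per-second benefit can be measured with a small-packet round-trip microbenchmark. The following sketch is a hypothetical harness of our own (it runs over loopback for illustration; an actual test would run between host and guest, comparing vhost on and off):

```python
import socket
import threading
import time

def udp_echo_server(sock, n_packets):
    # Echo every datagram straight back to its sender.
    for _ in range(n_packets):
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def measure_small_packet_rtt(n_packets=1000, payload=b"x" * 64):
    # Returns (average round-trip time per packet, packets per second)
    # for small (64-byte) UDP packets.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    addr = server.getsockname()
    t = threading.Thread(target=udp_echo_server, args=(server, n_packets))
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(5)
    start = time.time()
    for _ in range(n_packets):
        client.sendto(payload, addr)
        client.recvfrom(2048)
    elapsed = time.time() - start

    t.join()
    server.close()
    client.close()
    return elapsed / n_packets, n_packets / elapsed
```

Running this inside a guest against a host endpoint, with and without the in-kernel backend, is the kind of comparison the latency claim above refers to.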


Scope

All of the work is being done upstream in the kernel and qemu. Guest code is already upstream; host/qemu work is in progress. For Fedora 13 we will likely have to backport some of it.

Milestones:

Reached:

- Guest Kernel:

 MSI-X support in virtio net

- Host Kernel:

 iosignalfd, irqfd, eventfd polling
 finalize kernel/user interface    
 socket polling                    
 virtio transport with copy from/to user
 TX credits using destructor (or: poll device status)
 TSO/GSO                                             
 profile and tune                                    

- qemu:

 MSI-X support in virtio net
 connect to kernel backend with MSI-X     
 PCI interrupts emulation                 
 TSO/GSO        
 profile and tune

In progress:

 finalize qemu command line
 qemu: migration

Code posted, but won't be upstream in time and probably not important enough to backport

 raw sockets support in qemu, promisc mode

Delayed, will likely not make it by F13 ship date

 mergeable buffers
 programming MAC/vlan filtering

Test Plan

Guest:

  • WHQL networking tests

Networking:

  • Various MTU sizes
  • Broadcasts, multicasts
  • Ethtool
  • Latency tests
  • Bandwidth tests
  • UDP testing
  • Guest to guest communication
  • More types of protocol testing
  • Guest vlans
  • Test combinations of multiple vNICs on the guests
  • With/without {IP|TCP|UDP} offload
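Several of the networking tests above (various MTU sizes, UDP testing) come down to verifying that datagrams of different sizes survive the data path intact. A minimal sketch of such a check, using a hypothetical helper of our own and loopback in place of a real guest link:

```python
import socket

def udp_payload_roundtrip(sizes=(64, 512, 1400, 8900)):
    # Send UDP payloads sized around common MTUs (1500, jumbo 9000) and
    # verify each arrives intact. Over loopback here; run between guests,
    # sizes above the path MTU also exercise IP fragmentation.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    server.settimeout(5)
    addr = server.getsockname()
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    results = {}
    for size in sizes:
        payload = bytes(size)
        client.sendto(payload, addr)
        data, _ = server.recvfrom(65535)
        results[size] = (data == payload)

    server.close()
    client.close()
    return results
```

A full test run would repeat this across guest MTU settings and with offloads toggled via ethtool, per the list above.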

Virtualization:

  • Live migration

Kernel side:

  • Load/unload driver

User Experience

Users should see faster networking, at least in cases of SR-IOV or a dedicated per-guest network device.

Dependencies

  • Kernel acceleration is implemented in the kernel RPM and depends on changes in qemu-kvm to work correctly.

Contingency Plan

  • We don't turn it on by default if it turns out to be unstable.

Documentation

Release Notes

Comments and Discussion