KVM Lock Overview
=================

1. Acquisition Orders
---------------------

(to be written)

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast only if the
shadow page table is present and it is caused by write-protect, which means
we just need to change the W bit of the spte.

What we use to avoid all the races is the SPTE_HOST_WRITEABLE bit and the
SPTE_MMU_WRITEABLE bit on the spte:
- SPTE_HOST_WRITEABLE means the gfn is writable on host.
- SPTE_MMU_WRITEABLE means the gfn is writable on mmu. The bit is set when
  the gfn is writable on the guest mmu and it is not write-protected by
  shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte
W bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_MMU_WRITEABLE = 1. This
is safe because any change to these bits can be detected by cmpxchg.
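
The whole update is a single compare-and-swap on the spte word. Below is a
minimal, self-contained C sketch of this step; it uses C11 atomics instead
of the kernel's spte accessors, and the bit positions are illustrative
only, not the ones KVM really uses:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define PT_WRITABLE_MASK    (1ull << 1)   /* the W bit */
    #define SPTE_HOST_WRITEABLE (1ull << 57)  /* illustrative position */
    #define SPTE_MMU_WRITEABLE  (1ull << 58)  /* illustrative position */

    /* Try to make the spte writable without holding mmu-lock. */
    static bool fast_pf_fix_spte(_Atomic uint64_t *sptep)
    {
            uint64_t old_spte = atomic_load(sptep);

            if (!(old_spte & SPTE_HOST_WRITEABLE) ||
                !(old_spte & SPTE_MMU_WRITEABLE))
                    return false;

            /*
             * The compare-exchange fails if the spte changed between the
             * read above and this point, so a concurrent clear of either
             * bit (or any other change) is always detected.
             */
            return atomic_compare_exchange_strong(sptep, &old_spte,
                                old_spte | PT_WRITABLE_MASK);
    }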

But we need to carefully check these cases:
1): The mapping from gfn to pfn
The mapping from gfn to pfn may be changed since we can only ensure the pfn
is not changed during cmpxchg. This is an ABA problem; for example, the
case below can happen:

At the beginning:
gpte = gfn1
gfn1 is mapped to pfn1 on host
spte is the shadow page table entry corresponding to gpte and
spte = pfn1

   VCPU 0                           VCPU 1
on fast page fault path:

   old_spte = *spte;
                                 pfn1 is swapped out:
                                    spte = 0;

                                 pfn1 is re-allocated for gfn2.

                                 gpte is changed to point to
                                 gfn2 by the guest:
                                    spte = pfn1;

   if (cmpxchg(spte, old_spte, old_spte+W))
        mark_page_dirty(vcpu->kvm, gfn1)
             OOPS!!!

We dirty-log for gfn1; that means gfn2 is lost in the dirty-bitmap.

For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn. For indirect sp, before we do cmpxchg, we call gfn_to_pfn_atomic()
to pin gfn to pfn, because after gfn_to_pfn_atomic():
- We have held the refcount of pfn; that means the pfn can not be freed and
  be reused for another gfn.
- The pfn is writable; that means it can not be shared between different gfns
  by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.

Currently, to simplify the whole thing, we disable fast page fault for
indirect shadow pages.

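If fast page fault were enabled for indirect sps, the pinning step would
look roughly like the sketch below. This is hypothetical code, not what the
kernel does: it assumes the mmu.c helpers spte_to_pfn() and is_error_pfn(),
and a caller that has already read old_spte:

    /* Hypothetical: fix a write-protected spte of an indirect sp. */
    static bool fast_pf_fix_indirect_spte(struct kvm_vcpu *vcpu, gfn_t gfn,
                                          u64 *sptep, u64 old_spte)
    {
            bool ret = false;
            pfn_t pfn;

            /* Pin the pfn: holds a refcount and breaks KSM sharing. */
            pfn = gfn_to_pfn_atomic(vcpu->kvm, gfn);
            if (is_error_pfn(pfn))
                    return false;

            /* The gfn->pfn mapping changed under us: the ABA case. */
            if (pfn != spte_to_pfn(old_spte))
                    goto exit;

            if (cmpxchg64(sptep, old_spte,
                          old_spte | PT_WRITABLE_MASK) == old_spte) {
                    /* gfn is guaranteed to still map to pfn here. */
                    mark_page_dirty(vcpu->kvm, gfn);
                    ret = true;
            }
    exit:
            kvm_release_pfn_clean(pfn);
            return ret;
    }
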
2): Dirty bit tracking
In the original code, the spte can be fast updated (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since the
Accessed bit and Dirty bit can not be lost.

But it is not true after fast page fault since the spte can be marked
writable between reading spte and updating spte. Like the case below:

At the beginning:
spte.W = 0
spte.Accessed = 1

   VCPU 0                                       VCPU 1
In mmu_spte_clear_track_bits():

   old_spte = *spte;

   /* 'if' condition is satisfied. */
   if (old_spte.Accessed == 1 &&
        old_spte.W == 0)
      spte = 0ull;
                                         on fast page fault path:
                                             spte.W = 1
                                         memory write on the spte:
                                             spte.Dirty = 1


   else
      old_spte = xchg(spte, 0ull)


   if (old_spte.Accessed == 1)
      kvm_set_pfn_accessed(spte.pfn);
   if (old_spte.Dirty == 1)
      kvm_set_pfn_dirty(spte.pfn);
      OOPS!!!

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock [see spte_has_volatile_bits()]; it means
the spte is always atomically updated in this case.

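Concretely, the clear path must switch to an atomic exchange whenever a
lockless update is possible. A much-simplified sketch of the logic in
mmu_spte_clear_track_bits(), treating spte_has_volatile_bits() as given:

    /*
     * Simplified sketch: clear the spte and return the value that was
     * actually live when the clear happened.
     */
    static u64 spte_clear_track_bits(u64 *sptep)
    {
            u64 old_spte = *sptep;

            if (!spte_has_volatile_bits(old_spte))
                    /* No lockless writer can race with us. */
                    *sptep = 0ull;
            else
                    /*
                     * The spte can be made writable and dirtied out of
                     * mmu-lock, so fetch the final value atomically.
                     */
                    old_spte = xchg(sptep, 0ull);

            return old_spte;
    }
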
3): Flush TLBs due to spte updates
If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached on a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check whether
TLBs need to be flushed for this reason in mmu_spte_update(), since this is a
common function to update spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
update the spte atomically, so the race caused by fast page fault can be
avoided; see the comments in spte_has_volatile_bits() and mmu_spte_update().

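A much-simplified sketch of the flush decision in mmu_spte_update() is
below; the real function must also preserve the Accessed and Dirty
information, which is omitted here. is_writable_pte() and
spte_has_volatile_bits() are existing mmu.c helpers:

    /* Returns true if the caller needs to flush TLBs. */
    static bool mmu_spte_update(u64 *sptep, u64 new_spte)
    {
            u64 old_spte = *sptep;

            /* Volatile sptes must be updated atomically, see above. */
            if (spte_has_volatile_bits(old_spte))
                    old_spte = xchg(sptep, new_spte);
            else
                    *sptep = new_spte;

            /*
             * A writable spte may still be cached in some CPU's TLB, so
             * dropping the W bit requires a flush before the caller can
             * rely on write-protection.
             */
            return is_writable_pte(old_spte) && !is_writable_pte(new_spte);
    }
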
3. Reference
------------

Name:           kvm_lock
Type:           raw_spinlock
Arch:           any
Protects:       - vm_list
                - hardware virtualization enable/disable
Comment:        'raw' because hardware enabling/disabling must be atomic
                w.r.t. migration.

Name:           kvm_arch::tsc_write_lock
Type:           raw_spinlock
Arch:           x86
Protects:       - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
                - tsc offset in vmcb
Comment:        'raw' because updating the tsc offsets must not be preempted.

Name:           kvm->mmu_lock
Type:           spinlock_t
Arch:           any
Protects:       - shadow page/shadow tlb entry
Comment:        it is a spinlock since it is used in mmu notifier.