| author | Jay Zhou | 2017-07-28 12:28:53 +0200 |
|---|---|---|
| committer | Paolo Bonzini | 2017-08-01 17:27:33 +0200 |
| commit | 1931076077254a2886daa7c830c7838ebd1f81ef | |
| tree | 79cd172b4130defaf256155b99b8950234336408 /include/exec | |
| parent | qemu-options: document existance of versioned machine types | |
migration: optimize the downtime
qemu_savevm_state_cleanup takes about 300ms in my RAM migration tests
with an 8U24G VM (20G actually occupied). The main cost comes from the
KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
kvm_set_user_memory_region. In kmod, the main cost is
kvm_zap_obsolete_pages, which traverses the active_mmu_pages list to
zap the unsync sptes.
This can be optimized by delaying memory_global_dirty_log_stop until the
next vm_start.
Changes v2->v3:
- NULL the VMChangeStateHandler once it is deleted, and protect against
nested invocations of memory_global_dirty_log_start/stop [Paolo]
Changes v1->v2:
- create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]
Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
Message-Id: <1501237733-2736-1-git-send-email-jianjay.zhou@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'include/exec')
0 files changed, 0 insertions, 0 deletions
