path: root/migration
Commit message | Author | Age | Files | Lines
* migration/ram.c: do not set 'postcopy_running' in POSTCOPY_INCOMING_END | Daniel Henrique Barboza | 2017-11-22 | 1 | -2/+14
  When migrating a VM with 'migrate_set_capability postcopy-ram on', a postcopy_state is set during the process, ending up with the state POSTCOPY_INCOMING_END when the migration is over. This postcopy_state is taken into account inside ram_load to decide how it will load the memory pages. The same ram_load is called when a loadvm command is executed.

  Inside ram_load, the logic to see whether we are in the postcopy_running state is:

      postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING

  postcopy_state_get() returns this enum type:

      typedef enum {
          POSTCOPY_INCOMING_NONE = 0,
          POSTCOPY_INCOMING_ADVISE,
          POSTCOPY_INCOMING_DISCARD,
          POSTCOPY_INCOMING_LISTENING,
          POSTCOPY_INCOMING_RUNNING,
          POSTCOPY_INCOMING_END
      } PostcopyState;

  In the case where ram_load is executed and postcopy_state is POSTCOPY_INCOMING_END, postcopy_running will be set to 'true' and ram_load will behave as if a postcopy is in progress. This scenario isn't achievable in a migration, but it is reproducible when executing savevm/loadvm after migrating with 'postcopy-ram on', causing loadvm to fail with Error -22:

      Source:
      (qemu) migrate_set_capability postcopy-ram on
      (qemu) migrate tcp:127.0.0.1:4444

      Dest:
      (qemu) migrate_set_capability postcopy-ram on
      (qemu)
      ubuntu1704-intel login: Ubuntu 17.04 ubuntu1704-intel ttyS0
      ubuntu1704-intel login: (qemu)
      (qemu) savevm test1
      (qemu) loadvm test1
      Unknown combination of migration flags: 0x4 (postcopy mode)
      error while loading state for instance 0x0 of device 'ram'
      Error -22 while loading VM state
      (qemu)

  This patch fixes the problem by changing the existing logic for postcopy_advised and postcopy_running in ram_load, making them 'false' if we are at the POSTCOPY_INCOMING_END state.

  Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com> CC: Juan Quintela <quintela@redhat.com> CC: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reported-by: Balamuruhan S <bala24@linux.vnet.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
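  The fix can be illustrated with a minimal standalone C sketch of the check described above; the enum is the one quoted in the message, while the two helper functions and main() are illustrative only, not the actual migration/ram.c code:

      #include <stdbool.h>
      #include <stdio.h>

      typedef enum {
          POSTCOPY_INCOMING_NONE = 0,
          POSTCOPY_INCOMING_ADVISE,
          POSTCOPY_INCOMING_DISCARD,
          POSTCOPY_INCOMING_LISTENING,
          POSTCOPY_INCOMING_RUNNING,
          POSTCOPY_INCOMING_END
      } PostcopyState;

      /* Old check: POSTCOPY_INCOMING_END still counts as "postcopy running". */
      static bool postcopy_running_old(PostcopyState s)
      {
          return s >= POSTCOPY_INCOMING_LISTENING;
      }

      /* Fixed check: once the state is POSTCOPY_INCOMING_END, postcopy is over
       * and ram_load should behave like a normal (precopy/loadvm) load. */
      static bool postcopy_running_new(PostcopyState s)
      {
          return s >= POSTCOPY_INCOMING_LISTENING && s < POSTCOPY_INCOMING_END;
      }

      int main(void)
      {
          PostcopyState s = POSTCOPY_INCOMING_END;

          /* Prints "old: 1, new: 0": only the fixed check lets loadvm work. */
          printf("old: %d, new: %d\n", postcopy_running_old(s),
                 postcopy_running_new(s));
          return 0;
      }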
* migration, xen: Fix block image lock issue on live migration | Anthony PERARD | 2017-11-21 | 1 | -1/+22
  When doing a live migration of a Xen guest with libxl, the images for block devices are locked by the original QEMU process; this prevents the QEMU at the destination from taking the lock, and the migration fails. From QEMU's point of view, once the RAM of a domain is migrated, there are two QMP commands, "stop" then "xen-save-devices-state", at which point a new QEMU is spawned at the destination. Release the locks in "xen-save-devices-state" so the destination can take them, if it is a live migration. This patch adds the "live" parameter to "xen-save-devices-state", which defaults to true so that older versions of libxenlight can work with newer versions of QEMU.
  Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* block: Add errp to bdrv_all_goto_snapshot() | Kevin Wolf | 2017-11-21 | 1 | -3/+3
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Denis V. Lunev <den@openvz.org> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com>
* block: Make bdrv_next() keep strong references | Max Reitz | 2017-11-17 | 1 | -0/+1
  On one hand, it is a good idea for bdrv_next() to return a strong reference because ideally nearly every pointer should be refcounted. This fixes an intermittent failure of iotest 194. On the other hand, it is absolutely necessary for bdrv_next() itself to keep a strong reference to both the BB (in its first phase) and the BDS (at least in the second phase), because when called the next time it will dereference those objects to get a link to the next one. Therefore, it needs these objects to stay around until then. Just storing the pointer to the next one in the iterator is not really viable because that pointer might become invalid as well.

  Both arguments taken together mean we should probably just invoke bdrv_ref() and blk_ref() in bdrv_next(). This means we have to assert that bdrv_next() is always called from the main loop; that was probably necessary already before this patch, and judging from the callers it also appears to actually be the case. Keeping these strong references means, however, that callers need to give them up if they decide to abort the iteration early. They can do so through the new bdrv_next_cleanup() function.

  Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20171110172545.32609-1-mreitz@redhat.com Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
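  The same keep-a-strong-reference-while-iterating idea, reduced to a generic standalone C sketch (not QEMU code; every name here is illustrative): advancing pins the next element before releasing the current one, and an early abort has to release the remaining pin explicitly, which is the role bdrv_next_cleanup() plays for bdrv_next().

      #include <stdlib.h>

      typedef struct Node {
          int refcnt;
          struct Node *next;
      } Node;

      static void node_ref(Node *n)   { n->refcnt++; }
      static void node_unref(Node *n) { if (--n->refcnt == 0) { free(n); } }

      typedef struct Iter { Node *cur; } Iter;

      /* Start iterating: pin the first element so it cannot go away under us. */
      static Node *iter_first(Iter *it, Node *head)
      {
          it->cur = head;
          if (head) {
              node_ref(head);
          }
          return head;
      }

      /* Advance: pin the next element before dropping the current one, so the
       * pointer dereferenced on the next call is guaranteed to stay valid. */
      static Node *iter_next(Iter *it)
      {
          Node *next = it->cur ? it->cur->next : NULL;

          if (next) {
              node_ref(next);
          }
          if (it->cur) {
              node_unref(it->cur);
          }
          it->cur = next;
          return next;
      }

      /* Early-abort cleanup, analogous to the new bdrv_next_cleanup(). */
      static void iter_cleanup(Iter *it)
      {
          if (it->cur) {
              node_unref(it->cur);
              it->cur = NULL;
          }
      }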
* migration: Make xbzrle_cache_size a migration parameter | Juan Quintela | 2017-10-29 | 3 | -19/+32
  Right now it is a variable in MigrationState instead of a MigrationParameter. The change allows setting it like the rest of the migration parameters: from the command line, with query_migration_parameters, set_migrate_parameters, etc.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: No need to return the size of the cache | Juan Quintela | 2017-10-29 | 3 | -11/+7
  After the previous commits, we make sure that the value passed is right, or we raise an error. So now we just return whether there was an error or whether the passed value was set up correctly.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  --
  Improve error message
  Return 0 always for success
* migration: Don't play games with the requested cache size | Juan Quintela | 2017-10-29 | 1 | -5/+6
  Now that we check that the value passed is a power of 2, we don't need to play games when computing the size that the cache is going to take.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: Make sure that we pass the right cache size | Juan Quintela | 2017-10-29 | 1 | -5/+7
  Instead of silently rounding down the number of pages, make it an error if the cache size is not a power of 2.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
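  A standalone C sketch of the kind of validation this describes (the function names and message are assumptions, not the actual QEMU code):

      #include <inttypes.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* A value is a power of two iff it is non-zero and has exactly one bit set. */
      static bool is_power_of_2(uint64_t v)
      {
          return v && !(v & (v - 1));
      }

      /* Reject a cache size that is not a power of two instead of silently
       * rounding the number of pages down. */
      static int check_cache_size(uint64_t new_size)
      {
          if (!is_power_of_2(new_size)) {
              fprintf(stderr, "cache size %" PRIu64 " is not a power of two\n",
                      new_size);
              return -1;
          }
          return 0;
      }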
* migration: Improve migration thread error handling | Juan Quintela | 2017-10-23 | 3 | -5/+22
  We now report errors also when we finish migration, not only on "info migrate". We plan to use this error from several places, and we want the first error that happens to win, so we add a mutex to order it.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
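  A standalone C sketch of the "first error to happen wins" ordering described above (illustrative names, not the actual MigrationState code):

      #include <pthread.h>
      #include <stdlib.h>
      #include <string.h>

      static pthread_mutex_t error_mutex = PTHREAD_MUTEX_INITIALIZER;
      static char *migration_error;   /* NULL until the first error is recorded */

      /* Record an error only if none has been recorded yet; later errors from
       * other threads are dropped, so the first one to happen is the one that
       * gets reported when migration finishes. */
      static void migration_set_error(const char *msg)
      {
          pthread_mutex_lock(&error_mutex);
          if (!migration_error) {
              migration_error = strdup(msg);
          }
          pthread_mutex_unlock(&error_mutex);
      }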
* migration: add bitmap for received page | Alexey Perevalov | 2017-10-23 | 3 | -5/+57
  This patch adds the ability to track already received pages; this is necessary for calculating vCPU block time in the postcopy migration feature, and for recovery after a postcopy migration failure. It is also necessary for solving the shared memory issue in postcopy live migration. Information about received pages will be transferred to the software virtual bridge (e.g. OVS-VSWITCHD) to avoid fallocate (unmap) for already received pages. The fallocate syscall is required for remapped shared memory, because remapping itself blocks ioctl(UFFDIO_COPY); the ioctl in this case ends with an EEXIST error (the struct page already exists after the remap). The bitmap is placed into RAMBlock like the other postcopy/precopy related bitmaps.
  Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
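  A standalone C sketch of per-page receive tracking with a bitmap (illustrative only; in the patch the bitmap lives in RAMBlock and is indexed by the page offset within the block):

      #include <limits.h>
      #include <stdbool.h>
      #include <stddef.h>

      #define PAGE_SIZE 4096UL
      #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

      typedef struct {
          unsigned long *received;   /* one bit per guest page */
          size_t npages;
      } ReceivedBitmap;

      /* Mark a page as received right after it has been placed, so a later
       * fallocate/unmap pass can skip pages that have already arrived. */
      static void mark_received(ReceivedBitmap *b, unsigned long offset)
      {
          size_t page = offset / PAGE_SIZE;

          b->received[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
      }

      static bool page_received(const ReceivedBitmap *b, unsigned long offset)
      {
          size_t page = offset / PAGE_SIZE;

          return b->received[page / BITS_PER_LONG] & (1UL << (page % BITS_PER_LONG));
      }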
* migration: introduce qemu_ufd_copy_ioctl helper | Alexey Perevalov | 2017-10-23 | 1 | -13/+21
  Just to place auxiliary operations inside a helper; auxiliary operations such as tracking received pages and notifying about the copy operation will be added in further patches.
  Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: postcopy_place_page factoring out | Alexey Perevalov | 2017-10-23 | 3 | -10/+11
  Copied pages need to be marked as close as possible to the place where the copy is done. That will be necessary in a further patch.
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: new ram_init_bitmaps() | Peter Xu | 2017-10-23 | 1 | -21/+30
  Rearrange the bitmap initialization and the first sync. While at it, make sure the locks are taken/released in the correct order (I moved the RCU unlock upwards, though it may not matter much).
  Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: clean up xbzrle cache init/destroy | Peter Xu | 2017-10-23 | 1 | -46/+77
  Let's further simplify ram_init_all() and ram_save_cleanup() by abstracting all the XBZRLE related code into its own functions. When allocating the xbzrle cache, we are always very careful about -ENOMEM, which makes sense. Replace the last g_malloc0() with g_try_malloc0(), then refactor the logic a bit. This patch should also fix some memory leaks that happened in the past when a memory allocation for XBZRLE failed.
  Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: provide ram_state_cleanup | Peter Xu | 2017-10-23 | 1 | -3/+10
  There are two mutexes that are created but not destroyed for RAMState. Fix that. While we are at it, provide a helper function to clean up RAMState.
  Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: provide ram_state_init() | Peter Xu | 2017-10-23 | 1 | -10/+26
  The old ram_state_init() does not really initialize only the RAMState; it also includes lots of other RAM-related stuff. Rename it to ram_init_all() and, instead, provide a real ram_state_init().
  Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: pause-before-switchover for postcopy | Dr. David Alan Gilbert | 2017-10-23 | 1 | -7/+22
  Add pause-before-switchover support for postcopy. After starting postcopy it will transition active -> pre-switchover -> postcopy_active.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: allow cancel to unpause | Dr. David Alan Gilbert | 2017-10-23 | 1 | -0/+4
  If a migration_cancel is issued during the new paused state, kick the pause_sem so that we unpause and the migration can be cancelled.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: migrate-continue | Dr. David Alan Gilbert | 2017-10-23 | 1 | -0/+11
  A new QMP command allows the caller to continue from a given paused state.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: Wait for semaphore before completing migration | Dr. David Alan Gilbert | 2017-10-23 | 2 | -0/+41
  Wait for a semaphore before completing the migration, if the previously added capability was enabled.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: Add 'pre-switchover' and 'device' statuses | Dr. David Alan Gilbert | 2017-10-23 | 1 | -0/+6
  Add two statuses for use when the 'pause-before-switchover' capability is enabled. 'pre-switchover' is the state that we wait in for management to allow us to continue. 'device' is the state we enter while serialising the devices after management gives us the OK.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: Add 'pause-before-switchover' capability | Dr. David Alan Gilbert | 2017-10-23 | 2 | -0/+11
  When 'pause-before-switchover' is enabled, the outgoing migration will pause before invalidating the block devices and serializing the device state. At this point the management layer gets the chance to clean up any device jobs or other device users before the migration completes.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: Make cache_init() take an error parameter | Juan Quintela | 2017-10-23 | 3 | -23/+19
  While there, make it take a total size instead of a size in pages. We move the check that new_size is bigger than one page from xbzrle_cache_resize().
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
  --
  Fix typo spotted by Peter Xu
* migration: Move xbzrle cache resize error handling to xbzrle_cache_resize | Juan Quintela | 2017-10-23 | 3 | -20/+22
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
* migration: Make cache size elements use the right types | Juan Quintela | 2017-10-23 | 2 | -5/+5
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: Remove max_item_age parameter | Juan Quintela | 2017-10-23 | 1 | -2/+0
  It has not been used at all since commit 27af7d6ea5015e5ef1f7985eab94a8a218267a2b, which replaced its use with the dirty sync count.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
* migration: Fix migrate_test_apply for multifd parameters | Juan Quintela | 2017-10-23 | 1 | -0/+6
  They were missing when the multifd parameters were introduced into the tree.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
* dirty-bitmap: Change bdrv_[re]set_dirty_bitmap() to use bytes | Eric Blake | 2017-10-06 | 1 | -2/+5
  Some of the callers were already scaling bytes to sectors; others can be easily converted to pass byte offsets, all in our shift towards a consistent byte interface everywhere. Making the change will also make it easier to rewrite the hold-out callers to use bytes rather than sectors for their iterations; it also makes it easier for a future dirty-bitmap patch to offload scaling to the internal hbitmap. Although all callers happen to pass sector-aligned values, make the internal scaling robust to any sub-sector requests.
  Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
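  A standalone C sketch of byte-to-sector scaling made robust to sub-sector requests (the 512-byte sector size mirrors QEMU's BDRV_SECTOR_SIZE; the helper name is illustrative): the byte range is widened to the smallest covering sector range by rounding the start down and the end up.

      #include <stdint.h>

      #define SECTOR_BITS 9                    /* 512-byte sectors */
      #define SECTOR_SIZE (1 << SECTOR_BITS)

      /* Convert a byte range into the smallest sector range covering it, so a
       * sub-sector request still touches every sector it overlaps. */
      static void bytes_to_sectors(int64_t offset, int64_t bytes,
                                   int64_t *sector, int64_t *nb_sectors)
      {
          int64_t end = offset + bytes;

          *sector = offset >> SECTOR_BITS;
          *nb_sectors = ((end + SECTOR_SIZE - 1) >> SECTOR_BITS) - *sector;
      }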
* dirty-bitmap: Change bdrv_get_dirty_locked() to take bytes | Eric Blake | 2017-10-06 | 1 | -1/+2
  Half the callers were already scaling bytes to sectors; the other half can eventually be simplified to use byte iteration. Both callers were already using the result as a bool, so make that explicit. Making the change also makes it easier for a future dirty-bitmap patch to offload scaling over to the internal hbitmap. Remember, asking whether a byte is dirty is effectively asking whether the entire granularity containing the byte is dirty, since we only track dirtiness by granularity.
  Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* dirty-bitmap: Change bdrv_get_dirty_count() to report bytes | Eric Blake | 2017-10-06 | 1 | -1/+1
  Thanks to recent cleanups, all callers were scaling a return value of sectors into bytes; do the scaling internally instead.
  Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* migration: Route more error paths | Dr. David Alan Gilbert | 2017-09-27 | 1 | -3/+8
  vmstate_save_state is called in lots of places. Route error returns from the easier cases back up; there are lots of more complex cases where their own error paths need fixing.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170925112917.21340-7-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Commit message fixed up as per Peter's review
* migration: Route errors up through vmstate_save | Dr. David Alan Gilbert | 2017-09-27 | 1 | -5/+14
  Route the errors from vmstate_save_state back up through vmstate_save and out to the normal device state path. That's the normal error path done.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170925112917.21340-6-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: wire vmstate_save_state errors up to vmstate_subsection_save | Dr. David Alan Gilbert | 2017-09-27 | 1 | -8/+12
  Route the errors from vmstate_save_state up through vmstate_subsection_save (and back down, all rather recursive).
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170925112917.21340-5-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Commit message fixed up as per Peter's review
* migration: Check field save returns | Dr. David Alan Gilbert | 2017-09-27 | 1 | -3/+12
  Check the return values from vmstate_save_state for fields and also the return values from 'put' for fields that use that.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170925112917.21340-4-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: check pre_save return in vmstate_save_state | Dr. David Alan Gilbert | 2017-09-27 | 2 | -2/+11
  Check the return value of the pre_save hook and fail vmstate_save_state if pre_save failed.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170925112917.21340-3-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: pre_save return int | Dr. David Alan Gilbert | 2017-09-27 | 3 | -3/+9
  Modify the pre_save method on VMStateDescription to return an int rather than void so that it can potentially fail. Change zillions of devices to make them return 0; the only case made to return non-0 is hw/intc/s390_flic_kvm.c, which already had an error_report/return case. Note: if you add an error exit in your pre_save, you must emit an error_report to say why.
  Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170925112917.21340-2-dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
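  A standalone C sketch of the changed contract (the structure is a stand-in for QEMU's VMStateDescription, and the device hook is invented for illustration): pre_save now returns an int, 0 on success, and an error exit should be preceded by an error report explaining why.

      #include <stdio.h>

      typedef struct {
          const char *name;
          /* Was "void (*pre_save)(void *opaque)"; now it can fail. */
          int (*pre_save)(void *opaque);
      } VMStateDescriptionSketch;

      static int mydev_pre_save(void *opaque)
      {
          int device_busy = 0;   /* placeholder for a real failure condition */

          (void)opaque;
          if (device_busy) {
              /* Say why before returning non-zero. */
              fprintf(stderr, "mydev: cannot save state while device is busy\n");
              return -1;
          }
          return 0;              /* success, as the converted devices now do */
      }

      static const VMStateDescriptionSketch vmstate_mydev = {
          .name = "mydev",
          .pre_save = mydev_pre_save,
      };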
* migration: disable auto-converge during bulk block migration | Peter Lieven | 2017-09-27 | 3 | -1/+17
  auto-converge and block migration currently do not play well together. During block migration the auto-converge logic detects that ram migration makes no progress and thus throttles down the vm until it nearly stalls completely. Avoid this by disabling the throttling logic during the bulk phase of the block migration.
  Cc: qemu-stable@nongnu.org Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1506421996-12513-1-git-send-email-pl@kamp.de> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* migration: split ufd_version_check onto receive/request features part | Alexey Perevalov | 2017-09-22 | 1 | -6/+88
  This modification is necessary for userfaultfd features which have to be requested from userspace. UFFD_FEATURE_THREAD_ID is one such "on demand" feature; it will be introduced in the next patch. QEMU has to use a separate userfault file descriptor because the userfault context has internal state: after the first UFFD_API ioctl it changes its state to UFFD_STATE_RUNNING (in case of success), but the kernel, while handling the UFFD_API ioctl, expects UFFD_STATE_WAIT_API. So only one UFFD_API ioctl is possible per ufd.
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
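  A standalone Linux C sketch of why feature probing wants its own file descriptor: the UFFD_API ioctl can only be issued once per userfaultfd (it moves the context from WAIT_API to RUNNING), so features are queried on a throwaway fd that is closed afterwards, while the fd used for the real postcopy work issues its own UFFD_API call with the features it actually requests. Error handling is trimmed; this assumes a kernel with userfaultfd support.

      #include <fcntl.h>
      #include <linux/userfaultfd.h>
      #include <stdint.h>
      #include <string.h>
      #include <sys/ioctl.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      /* Ask the kernel which userfaultfd features are available, on a private fd. */
      static int ufd_query_features(uint64_t *features)
      {
          int ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
          struct uffdio_api api;

          if (ufd < 0) {
              return -1;
          }
          memset(&api, 0, sizeof(api));
          api.api = UFFD_API;           /* requesting no optional features here */
          if (ioctl(ufd, UFFD_API, &api) < 0) {
              close(ufd);
              return -1;
          }
          *features = api.features;     /* what the kernel can offer */
          close(ufd);                   /* this fd is now RUNNING, so discard it */
          return 0;
      }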
* migration: fix hardcoded function name in error report | Alexey Perevalov | 2017-09-22 | 1 | -1/+1
  Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: pass MigrationIncomingState* into migration check functions | Alexey Perevalov | 2017-09-22 | 4 | -8/+9
  This tiny refactoring is necessary to be able to set UFFD_FEATURE_THREAD_ID while requesting features, and then to create the downtime context in case the kernel supports it.
  Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: split common postcopy out of ram postcopy | Vladimir Sementsov-Ogievskiy | 2017-09-22 | 3 | -22/+67
  Split the common postcopy stuff from the RAM postcopy stuff.
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: fix ram_save_pending | Vladimir Sementsov-Ogievskiy | 2017-09-22 | 1 | -2/+6
  Fill the postcopy-able pending amount only if RAM postcopy is enabled. This is necessary because there will be other postcopy-able states, and when RAM postcopy is disabled it should not spoil the common postcopy-related pending.
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
* migration: add has_postcopy savevm handler | Vladimir Sementsov-Ogievskiy | 2017-09-22 | 2 | -2/+10
  Currently, postcopy-able states are recognized by a non-NULL save_live_complete_postcopy handler. But when we have several different postcopy-able states, that is not convenient: RAM postcopy may be disabled while some other postcopy is enabled, and in that case the RAM state should behave as if it is not postcopy-able. This patch adds a separate has_postcopy handler to specify the behaviour of a savevm state.
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
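  A standalone C sketch of the dispatch change (the struct is a stand-in for the real SaveVMHandlers): with a has_postcopy callback, whether a state is postcopy-able becomes a runtime decision instead of "the completion handler pointer is non-NULL".

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct {
          bool (*has_postcopy)(void *opaque);
          int (*save_live_complete_postcopy)(void *opaque);
      } SaveVMHandlersSketch;

      /* Old rule: postcopy-able iff save_live_complete_postcopy is set.
       * New rule: ask has_postcopy() when it is provided, so e.g. the RAM state
       * can report "not postcopy-able" while RAM postcopy is disabled even
       * though its completion handler is always compiled in. */
      static bool state_is_postcopyable(const SaveVMHandlersSketch *ops, void *opaque)
      {
          if (ops->has_postcopy) {
              return ops->has_postcopy(opaque);
          }
          return ops->save_live_complete_postcopy != NULL;
      }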
* migration: Split migration_fd_process_incoming | Juan Quintela | 2017-09-22 | 1 | -2/+12
  We need that in later patches.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
* migration: Create multifd migration threads | Juan Quintela | 2017-09-22 | 3 | -0/+233
  Creation of the threads, nothing inside yet.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  --
  Use pointers instead of long array names
  Move to using semaphores instead of conditions, as Paolo suggested
  Put all the state inside one struct
  Use a counter for the number of threads created; needed during cancellation
  Add error return to thread creation
  Add id field
  Rename functions to multifd_save/load_setup/cleanup
  Change recv parameters to a pointer to struct
  Change back to a struct
  Use Error * for _cleanup
* migration: Create x-multifd-page-count parameter | Juan Quintela | 2017-09-22 | 2 | -0/+28
  Indicates how many pages we are going to send in each batch to a multifd thread.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
  --
  Be consistent with defaults and documentation
  Use new DEFINE_PROP_*
  Rename x-multifd-group to x-multifd-page-count
* migration: Create x-multifd-channels parameter | Juan Quintela | 2017-09-22 | 2 | -0/+27
  Indicates the number of channels that we will create. By default we create 2 channels.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
  --
  Catch inconsistent defaults (Eric)
  Improve comment stating that the number of threads is the same as the number of sockets
  Use new DEFINE_PROP_*
  Rename x-multifd-threads to x-multifd-channels
* migration: Add multifd capability | Juan Quintela | 2017-09-22 | 2 | -0/+11
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
  --
  Use new DEFINE_PROP
* migration: Create migration_has_all_channels | Juan Quintela | 2017-09-22 | 3 | -3/+20
  This function allows us to decide when to close the listener socket. For now, we only need one connection.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
* migration: Add comments to channel functions | Juan Quintela | 2017-09-22 | 1 | -0/+15
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Daniel P. Berrange <berrange@redhat.com>