author     Eduardo Habkost    2017-07-12 18:20:58 +0200
committer  Eduardo Habkost    2017-07-26 19:55:12 +0200
commit     bd1820227ecc0c77cc2aeba7c7c25b2d0a72ff3c (patch)
tree       fe13769930b945430cf0e9b96b664eaa38cbad4b /target
parent     target/i386: Define CPUID_MODEL_ID_SZ macro (diff)
target/i386: Don't use x86_cpu_load_def() on "max" CPU model
When commit 0bacd8b3046f ('i386: Don't set CPUClass::cpu_def on
"max" model') removed the CPUClass::cpu_def field, we kept using
the x86_cpu_load_def() helper directly in max_x86_cpu_initfn(),
emulating the previous behavior when CPUClass::cpu_def was set.
However, x86_cpu_load_def() is intended to help initialization of
CPU models from the builtin_x86_defs table, and does lots of
other steps that are not necessary for "max".
One of the things x86_cpu_load_def() does is set the properties
listed in tcg_default_props/kvm_default_props. We must not do
that on the "max" CPU model, otherwise under KVM we will
incorrectly report all KVM features as always available, and the
"svm" feature as always unavailable. The latter caused the bug
reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1467599
("Unable to start domain: the CPU is incompatible with host CPU:
Host CPU does not provide required features: svm")
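For context, the mechanism behind the bug, reconstructed as a
sketch (paraphrased from target/i386/cpu.c of that era, not a
verbatim quote): kvm_default_props pins "svm" to "off" for named
CPU models, and x86_cpu_apply_props() applies each entry as a QOM
property, so running it on "max" forcibly disabled SVM:

    /* Sketch: properties applied to named CPU models under KVM.
     * Note the last real entry: "svm" is forced off. */
    static PropValue kvm_default_props[] = {
        { "kvmclock", "on" },
        { "kvm-nopiodelay", "on" },
        { "kvm-asyncpf", "on" },
        { "kvm-steal-time", "on" },
        { "kvm-pv-eoi", "on" },
        { "x2apic", "on" },
        { "acpi", "off" },
        { "monitor", "off" },
        { "svm", "off" },
        { NULL, NULL },
    };

    static void x86_cpu_apply_props(X86CPU *cpu, PropValue *props)
    {
        PropValue *pv;
        for (pv = props; pv->prop; pv++) {
            if (!pv->value) {
                continue;
            }
            /* Parses e.g. "off" and sets the named QOM property */
            object_property_parse(OBJECT(cpu), pv->value, pv->prop,
                                  &error_abort);
        }
    }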
Replace x86_cpu_load_def() with simple object_property_set*()
calls. In addition to fixing the above bug, this makes the KVM
branch in max_x86_cpu_initfn() very similar to the existing TCG
branch.
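For comparison, the preexisting TCG branch of max_x86_cpu_initfn()
already used plain property setters, roughly as follows (sketch as
of this commit; the values are the qemu64-style TCG defaults):

    } else {
        object_property_set_str(OBJECT(cpu), CPUID_VENDOR_AMD,
                                "vendor", &error_abort);
        object_property_set_int(OBJECT(cpu), 6, "family", &error_abort);
        object_property_set_int(OBJECT(cpu), 6, "model", &error_abort);
        object_property_set_int(OBJECT(cpu), 3, "stepping", &error_abort);
        object_property_set_str(OBJECT(cpu),
                                "QEMU TCG CPU version " QEMU_HW_VERSION,
                                "model-id", &error_abort);
    }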
For reference, the full list of steps performed by
x86_cpu_load_def() is:
* Setting min-level and min-xlevel. Already done by
max_x86_cpu_initfn().
* Setting family/model/stepping/model-id. Done by the code added
to max_x86_cpu_initfn() in this patch.
* Copying def->features. Wrong because "-cpu max" features need to
be calculated at realize time (see the sketch after this list).
This was not a problem in the current code because
host_cpudef.features was all zeroes.
* x86_cpu_apply_props() calls. This causes the bug above, and
shouldn't be done.
* Setting CPUID_EXT_HYPERVISOR. Not needed because it is already
reported by x86_cpu_get_supported_feature_word(), and because
"-cpu max" features need to be calculated at realize time.
* Setting CPU vendor to host CPU vendor if on KVM mode.
Redundant, because max_x86_cpu_initfn() already sets it to the
host CPU vendor.
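As a reference for the realize-time calculation mentioned above,
the expansion done for max_features looks roughly like this (a
sketch along the lines of x86_cpu_expand_features(); not a
verbatim quote):

    /* Sketch: with max_features set, every feature word is filled
     * from what the accelerator actually supports, instead of from
     * a static CPU model definition. */
    if (cpu->max_features) {
        FeatureWord w;
        for (w = 0; w < FEATURE_WORDS; w++) {
            env->features[w] =
                x86_cpu_get_supported_feature_word(w, cpu->migratable);
        }
    }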
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170712162058.10538-5-ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Diffstat (limited to 'target')
-rw-r--r--  target/i386/cpu.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 8558c600bf..ddc45abd70 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1644,15 +1644,21 @@ static void max_x86_cpu_initfn(Object *obj)
     cpu->max_features = true;
 
     if (kvm_enabled()) {
-        X86CPUDefinition host_cpudef = { };
-        uint32_t eax = 0, ebx = 0, ecx = 0, edx = 0;
+        char vendor[CPUID_VENDOR_SZ + 1] = { 0 };
+        char model_id[CPUID_MODEL_ID_SZ + 1] = { 0 };
+        int family, model, stepping;
 
-        host_vendor_fms(host_cpudef.vendor, &host_cpudef.family,
-                        &host_cpudef.model, &host_cpudef.stepping);
+        host_vendor_fms(vendor, &family, &model, &stepping);
 
-        cpu_x86_fill_model_id(host_cpudef.model_id);
+        cpu_x86_fill_model_id(model_id);
 
-        x86_cpu_load_def(cpu, &host_cpudef, &error_abort);
+        object_property_set_str(OBJECT(cpu), vendor, "vendor", &error_abort);
+        object_property_set_int(OBJECT(cpu), family, "family", &error_abort);
+        object_property_set_int(OBJECT(cpu), model, "model", &error_abort);
+        object_property_set_int(OBJECT(cpu), stepping, "stepping",
+                                &error_abort);
+        object_property_set_str(OBJECT(cpu), model_id, "model-id",
+                                &error_abort);
 
         env->cpuid_min_level =
             kvm_arch_get_supported_cpuid(s, 0x0, 0, R_EAX);
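A quick way to check the fix on an SVM-capable host (illustrative
command lines, not part of the original commit):

    $ qemu-system-x86_64 -enable-kvm -cpu max ...
    # inside the guest: non-zero once "svm" is no longer forced off
    $ grep -c svm /proc/cpuinfo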