Now cpu_mips_init() reimplements a subset of cpu_generic_init()'s
tasks, so just drop it and use cpu_generic_init() directly.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: use internal.h instead of cpu.h]
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
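(An illustrative sketch, not part of the original patch: a board that used to
call cpu_mips_init(cpu_model) can create its CPU through the generic helper
roughly as follows; the error handling mirrors what boards did at the time.)

    MIPSCPU *cpu = MIPS_CPU(cpu_generic_init(TYPE_MIPS_CPU, cpu_model));
    if (cpu == NULL) {
        fprintf(stderr, "Unable to find CPU definition\n");
        exit(1);
    }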
Register a separate QOM type for each MIPS CPU model,
so that the generic CPU creation routines can be reused.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: use internal.h, use void* to hold cpu_def in MIPSCPUClass,
mark MIPSCPU abstract, address Eduardo Habkost review]
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
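(An illustrative sketch of the per-model type registration, assuming the
mips_defs[] table already present in target/mips; the class_init helper name
is hypothetical.)

    static void mips_register_cpudef_type(const struct mips_def_t *def)
    {
        char *typename = g_strdup_printf("%s-" TYPE_MIPS_CPU, def->name);
        TypeInfo ti = {
            .name = typename,
            .parent = TYPE_MIPS_CPU,
            .class_init = mips_cpu_cpudef_class_init, /* stores def in MIPSCPUClass */
            .class_data = (void *)def,
        };

        type_register(&ti);
        g_free(typename);
    }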
This changes the order between cpu_mips_realize_env() and
cpu_exec_initfn(), but cpu_exec_initfn() doesn't have anything that
depends on cpu_mips_realize_env() being called first.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
So it can be used in mips_cpu_realizefn() in the next commit.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
No logical change, only code movement (and a comment typo fix).
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
This timer is a required part of the MIPS32/MIPS64 System Control coprocessor
(CP0). Moving it with the other architecture-related files will allow an opaque
use of CPUMIPSState* in the next commit (which introduces "internal.h").
Also remove it from 'user' targets and drop an unnecessary include.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Merge remote-tracking branch 'remotes/mcayland/tags/qemu-sparc-signed' into staging
qemu-sparc update
# gpg: Signature made Thu 21 Sep 2017 08:42:30 BST
# gpg: using RSA key 0x5BC2C56FAE0F321F
# gpg: Good signature from "Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>"
# Primary key fingerprint: CC62 1AB9 8E82 200D 915C C9C4 5BC2 C56F AE0F 321F
* remotes/mcayland/tags/qemu-sparc-signed:
sun4u: use sunhme as default on-board NIC
net: add Sun HME (Happy Meal Ethernet) on-board NIC
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Enable it by default for the sparc64-softmmu configuration.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Acked-by: Artyom Tarasenko <atar4qemu@gmail.com>
Merge remote-tracking branch 'remotes/sstabellini/tags/xen-20170920-tag' into staging
Xen 2017/09/20
# gpg: Signature made Thu 21 Sep 2017 03:20:02 BST
# gpg: using RSA key 0x894F8F4870E1AE90
# gpg: Good signature from "Stefano Stabellini <stefano.stabellini@eu.citrix.com>"
# gpg: aka "Stefano Stabellini <sstabellini@kernel.org>"
# Primary key fingerprint: D04E 33AB A51F 67BA 07D3 0AEA 894F 8F48 70E1 AE90
* remotes/sstabellini/tags/xen-20170920-tag:
xen/pt: allow QEMU to request MSI unmasking at bind time
xen-disk: use g_new0 to fix build
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When an MSI interrupt is bound to a guest using
xc_domain_update_msi_irq (XEN_DOMCTL_bind_pt_irq), the interrupt is
left masked by default.
This causes problems with guests that first configure interrupts and
clear the per-entry MSIX table mask bit and afterwards enable MSIX
globally. In such a scenario the Xen internal msixtbl handlers would not
detect the unmasking of MSIX entries, because the vectors are not yet
registered since MSIX is not enabled, and the vectors would be left
masked.
Introduce a new flag in the gflags field to signal Xen whether an MSI
interrupt should be unmasked after being bound.
This also requires tracking the mask register for MSI interrupts, so
that QEMU can notify Xen whether the MSI interrupt should be bound
masked or unmasked.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reported-by: Andreas Kinzler <hfp@posteo.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
g_malloc0_n is available since glib-2.24. To allow building with older glib
versions, use the generic g_new0, which is already used in many other
places in the code.
Fixes commit 3284fad728 ("xen-disk: add support for multi-page shared rings")
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
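(An illustrative before/after of the substitution; the variable names are
hypothetical, not taken from xen_disk.c.)

    /* before: g_malloc0_n() requires glib >= 2.24 */
    void **pages = g_malloc0_n(nr_ring_pages, sizeof(void *));

    /* after: g_new0() performs the same zeroed, overflow-checked allocation */
    void **pages = g_new0(void *, nr_ring_pages);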
These patches fix regressions in 2.10
# gpg: Signature made Wed 20 Sep 2017 07:51:07 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
9pfs: check the size of transport buffer before marshaling
9pfs: fix name_to_path assertion in v9fs_complete_rename()
9pfs: fix readdir() for 9p2000.u
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
v9fs_do_readdir_with_stat() should check the maximum buffer size
before attempting to marshal the gathered data. Otherwise, the buffer is
assumed to be misconfigured and the transport is broken.
The patch brings v9fs_do_readdir_with_stat() into conformity with
the v9fs_do_readdir() behavior.
Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
[groug, regression caused by commit 8d37de41cab1 # 2.10]
Signed-off-by: Greg Kurz <groug@kaod.org>
The third parameter of v9fs_co_name_to_path() must not contain the '/'
character.
The issue is most likely relevant to the 9p2000.u protocol only.
Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
[groug, regression caused by commit f57f5878578a # 2.10]
Signed-off-by: Greg Kurz <groug@kaod.org>
If the client is using 9p2000.u, the following occurs:
$ cd ${virtfs_shared_dir}
$ mkdir -p a/b/c
$ ls a/b
ls: cannot access 'a/b/a': No such file or directory
ls: cannot access 'a/b/b': No such file or directory
a b c
instead of the expected:
$ ls a/b
c
This is a regression introduced by commit f57f5878578a;
local_name_to_path() now resolves ".." and "." in paths,
and v9fs_do_readdir_with_stat()->stat_to_v9stat() then
copies the basename of the resulting path to the response.
With the example above, this means that "." and ".." are
turned into "b" and "a" respectively...
stat_to_v9stat() currently assumes it is passed a full
canonicalized path and uses it to do two different things:
1) to pass it to v9fs_co_readlink() in case the file is a symbolic
link
2) to set the name field of the V9fsStat structure to the basename
part of the given path
It only has two users: v9fs_stat() and v9fs_do_readdir_with_stat().
v9fs_stat() really needs 1) and 2) to be performed since it starts
with the full canonicalized path stored in the fid. It is different
for v9fs_do_readdir_with_stat() though, because the name we want to
put into the V9fsStat structure is actually the d_name field of the
dirent (i.e., we want to keep the "." and ".." special names). So,
we only need 1) in this case.
This patch hence adds a basename argument to stat_to_v9stat(), to
be used to set the name field of the V9fsStat structure, and moves
the basename logic to v9fs_stat().
Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
(groug, renamed old name argument to path and updated changelog)
Signed-off-by: Greg Kurz <groug@kaod.org>
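(A sketch of the resulting call sites, assuming the new basename parameter;
the exact signatures in hw/9pfs/9p.c may differ slightly.)

    /* v9fs_do_readdir_with_stat(): keep the dirent name, "." and ".." included */
    err = stat_to_v9stat(pdu, &path, dent->d_name, &stbuf, &v9stat);

    /* v9fs_stat(): derive the basename from the fid's canonicalized path */
    char *basename = g_path_get_basename(fidp->path.data);
    err = stat_to_v9stat(pdu, &fidp->path, basename, &stbuf, &v9stat);
    g_free(basename);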
Merge remote-tracking branch 'remotes/ehabkost/tags/machine-next-pull-request' into staging
Machine/CPU/NUMA queue, 2017-09-19
# gpg: Signature made Tue 19 Sep 2017 21:17:01 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/machine-next-pull-request:
MAINTAINERS: Update git URLs for my trees
hw/acpi-build: Fix SRAT memory building in case of node 0 without RAM
NUMA: Replace MAX_NODES with nb_numa_nodes in for loop
numa: cpu: calculate/set default node-ids after all -numa CLI options are parsed
arm: drop intermediate cpu_model -> cpu type parsing and use cpu type directly
pc: use generic cpu_model parsing
vl.c: convert cpu_model to cpu type and set of global properties before machine_init()
cpu: make cpu_generic_init() abort QEMU on error
qom: cpus: split cpu_generic_init() on feature parsing and cpu creation parts
hostmem-file: Add "discard-data" option
osdep: Define QEMU_MADV_REMOVE
vl: Clean up user-creatable objects when exiting
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
List the branches where I queue patches for Machine Core, NUMA,
Memory Backends, and X86. Update the NUMA section to list the
"machine-next" branch instead of "numa".
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170901153928.17058-1-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently, using the first node without memory on the machine makes
QEMU unhappy. With this example command line:
    ... \
    -m 1024M,slots=4,maxmem=32G \
    -numa node,nodeid=0 \
    -numa node,mem=1024M,nodeid=1 \
    -numa node,nodeid=2 \
    -numa node,nodeid=3 \
the guest reports "No NUMA configuration found" and the NUMA topology is
wrong.
This is because when QEMU builds the ACPI SRAT, it regards node 0 as the
default node to deal with the memory hole (640K-1M). This means node 0
must have some memory (>1M), but actually it can have no memory.
Fix this problem by cutting out the 640K hole in the same way the PCI
4G hole is cut out.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Message-Id: <1504231805-30957-2-git-send-email-douly.fnst@cn.fujitsu.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In QEMU, the number of NUMA nodes is determined by parse_numa_opts().
Then QEMU uses it for iteration, for example:
    for (i = 0; i < nb_numa_nodes; i++)
However, memory_region_allocate_system_memory() uses MAX_NODES
instead of nb_numa_nodes.
So, replace MAX_NODES with nb_numa_nodes to keep the code consistent and
to reduce the number of loop iterations.
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Message-Id: <1503387936-3483-1-git-send-email-douly.fnst@cn.fujitsu.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
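(An illustrative before/after of the loop bound in
memory_region_allocate_system_memory(); the loop body shown is schematic.)

    /* before: also iterates over node slots that were never configured */
    for (i = 0; i < MAX_NODES; i++) {
        if (numa_info[i].node_mem > 0) {
            /* ... allocate the per-node memory region ... */
        }
    }

    /* after: only the nodes set up by parse_numa_opts() */
    for (i = 0; i < nb_numa_nodes; i++) {
        if (numa_info[i].node_mem > 0) {
            /* ... allocate the per-node memory region ... */
        }
    }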
Calculating default node-ids for CPUs in possible_cpu_arch_ids()
is rather fragile, since the defaults calculation uses nb_numa_nodes but
the callback might potentially be called early, before all -numa CLI
options are parsed, which would lead to CPUs being assigned only up to
the nb_numa_nodes value at the time possible_cpu_arch_ids() is called.
The issue was introduced by commit
7c88e65 ("numa: mirror cpu to node mapping in MachineState::possible_cpus"),
and for example the CLI:
    -smp 4 -numa node,cpus=0 -numa node
would set props.node-id in the possible_cpus array for every not
explicitly mapped CPU to the first node.
The issue is not visible to the guest nor to the management interface, because
  1) implicitly mapped cpus are forced to the first node in
     case of partial mapping
  2) in case of default mapping, possible_cpu_arch_ids() is
     called after all -numa options are parsed (resulting
     in the correct mapping).
However, it is fragile to rely on the late execution of
possible_cpu_arch_ids(), therefore add a machine-specific
callback that returns the node-id for a CPU and use it to calculate/
set the defaults at machine_numa_finish_init() time, when all -numa
options have been parsed.
Reported-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1496314408-163972-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
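(A minimal sketch of the new hook, assuming the callback is named
get_default_cpu_node_id as in later QEMU; the loop shown is schematic.)

    /* hw/core/machine.c: fill in missing node-ids once all -numa options are known */
    static void machine_numa_finish_init(MachineState *machine)
    {
        MachineClass *mc = MACHINE_GET_CLASS(machine);
        CPUArchIdList *cpus = machine->possible_cpus;
        int i;

        for (i = 0; i < cpus->len; i++) {
            if (!cpus->cpus[i].props.has_node_id) {
                cpus->cpus[i].props.node_id = mc->get_default_cpu_node_id(machine, i);
                cpus->cpus[i].props.has_node_id = true;
            }
        }
    }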
There are 2 use cases to deal with:
  1: fixed CPU models per board/soc
  2: boards with a user-configurable cpu_model and a fallback to a
     default cpu_model if the user hasn't specified one explicitly
For the 1st,
drop the intermediate cpu_model parsing and use the const cpu type
directly, which replaces:
    typename = object_class_get_name(
        cpu_class_by_name(TYPE_ARM_CPU, cpu_model))
    object_new(typename)
with
    object_new(FOO_CPU_TYPE_NAME)
or
    cpu_generic_init(BASE_CPU_TYPE, "my cpu model")
with
    cpu_create(FOO_CPU_TYPE_NAME)
As a result, the 1st use case doesn't have to invoke an unnecessary
translation, and the unneeded code is removed.
For the 2nd,
  1: set the default cpu type with MachineClass::default_cpu_type and
  2: use the generic cpu_model parsing that is done before machine_init()
     is run and:
     2.1: drop the custom cpu_model parsing where the pattern is:
          typename = object_class_get_name(
              cpu_class_by_name(TYPE_ARM_CPU, cpu_model))
          [parse_features(typename, cpu_model, &err) ]
     2.2: or replace cpu_generic_init(), which does what
          2.1 does + create_cpu(typename), with just
          create_cpu(machine->cpu_type)
As a result, the cpu_name -> cpu_type translation is done by the
generic machine code, including parsing of optional features
when supported/present (removing a bunch of duplicated cpu_model
parsing code), and the default cpu type is defined in a uniform way
within machine_class_init callbacks instead of in ad-hoc places
in the board's machine_init code.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1505318697-77161-6-git-send-email-imammedo@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
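(Illustrative sketches of the two patterns after the conversion;
ARM_CPU_TYPE_NAME() comes from QEMU's ARM headers, the surrounding board code
is schematic.)

    /* use case 1: a board/SoC with a fixed CPU model */
    Object *cpuobj = object_new(ARM_CPU_TYPE_NAME("cortex-m3"));

    /* use case 2: user-configurable model, default chosen per machine class */
    static void some_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
    }

    /* ...the board's machine_init() then simply does: */
    Object *cpuobj = object_new(machine->cpu_type);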
Define the default CPU type in a generic way in pc_machine_class_init()
and let the common machine code handle cpu_model parsing.
The patch also introduces a TARGET_DEFAULT_CPU_TYPE define for 2 purposes:
  * make foo_machine_class_init() look uniform on every target
  * use the define in [bsd|linux]-user targets to pick the default
    cpu type
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1505318697-77161-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
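(A minimal sketch, assuming the X86_CPU_TYPE_NAME() helper; "qemu64" is shown
only as the usual PC default.)

    /* per-target header (sketch) */
    #define TARGET_DEFAULT_CPU_TYPE X86_CPU_TYPE_NAME("qemu64")

    /* hw/i386/pc.c (sketch) */
    static void pc_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        mc->default_cpu_type = TARGET_DEFAULT_CPU_TYPE;
    }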
vl.c: convert cpu_model to cpu type and set of global properties before machine_init()
All machines that support a user-specified cpu_model either call
cpu_generic_init() or cpu_class_by_name()/CPUClass::parse_features
to parse the feature string and to get the CPU type to create.
This leads to code duplication and to hard-coding the default CPU model
within the machine_foo_init() code, and it makes it impossible to
get the CPU type before machine_init() is run.
So instead of setting default CPU models and doing the parsing in
target-specific machine_foo_init() in various ways, provide
generic, data-driven cpu_model parsing before machine_init()
is called.
In follow-up per-target patches, this will allow us to:
  * define the default CPU type in a consistent/generic manner
    per machine type and drop the custom code that falls back
    to the default if cpu_model is NULL
  * drop the custom feature parsing in targets and do it
    in a centralized way
  * for cases of
        cpu_generic_init(TYPE_BASE/DEFAULT_CPU, "some_cpu")
    replace it with
        cpu_create(machine->cpu_type) || cpu_create(TYPE_FOO)
    depending on whether the CPU type is user-settable or not,
    avoiding useless parsing and clearly documenting where the
    CPU model is user-settable and where it is fixed
The patch allows machine subclasses to define a default CPU type
per machine class at class_init() time; if that is set, the
generic code will parse cpu_model into MachineState::cpu_type,
which will be used to create CPUs for that machine instance,
allowing gradual per-board conversion.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1505318697-77161-4-git-send-email-imammedo@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
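(A schematic sketch of the generic step in vl.c; the parsing helper's name
varied between QEMU versions, so cpu_parse_cpu_model() here is indicative.)

    /* vl.c, before machine_init() runs */
    if (machine_class->default_cpu_type) {
        current_machine->cpu_type = machine_class->default_cpu_type;
        if (cpu_model) {
            /* translate "-cpu model,feat=..." into a concrete QOM type name,
               registering the features as global properties */
            current_machine->cpu_type =
                cpu_parse_cpu_model(machine_class->default_cpu_type, cpu_model);
        }
    }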
Almost every user of cpu_generic_init() checks for a returned NULL
and then reports the failure in a custom way and aborts the process.
Some users assume the call can't fail and don't check
for failure, though they should have.
In either case a cpu_generic_init() failure is fatal,
so instead of checking for failure and reporting
it in various ways, make cpu_generic_init() report
errors in a consistent way and terminate QEMU on failure.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1505318697-77161-3-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
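(An illustrative caller before/after; TYPE_FOO_CPU and the error message are
placeholders.)

    /* before: every board rolled its own fatal-error handling */
    cpu = cpu_generic_init(TYPE_FOO_CPU, cpu_model);
    if (cpu == NULL) {
        error_report("unable to find CPU definition '%s'", cpu_model);
        exit(1);
    }

    /* after: cpu_generic_init() itself reports the error and exits QEMU */
    cpu = cpu_generic_init(TYPE_FOO_CPU, cpu_model);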
This allows the feature-parsing part to be reused in the various machines
that have CPU features, instead of re-implementing the same feature
parsing each time.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1505318697-77161-2-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
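(A schematic sketch of the split: one helper turns the cpu_model string into a
QOM type name and applies the features, the other just instantiates the type;
the helper names shown here are indicative.)

    /* feature parsing: "cortex-a9,neon=off" -> "cortex-a9-arm-cpu" + globals */
    const char *typename = cpu_parse_cpu_model(TYPE_ARM_CPU, cpu_model);

    /* cpu creation: plain instantiation of an already-known type */
    CPUState *cpu = cpu_create(typename);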
The new option can be used to indicate that the file contents can
be destroyed and don't need to be flushed to disk when QEMU exits
or when the memory backend object is removed.
Internally, it will trigger a madvise(MADV_REMOVE) call when the
memory backend is removed.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170824192315.5897-4-ehabkost@redhat.com>
[ehabkost: fixup: improved documentation]
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Tested-by: Zack Cornelius <zack.cornelius@kove.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
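(A rough sketch of the cleanup path, assuming hostmem-file wires up the QOM
"unparent" hook; apart from qemu_madvise()/QEMU_MADV_REMOVE the names are
indicative.)

    static void file_backend_unparent(Object *obj)
    {
        HostMemoryBackend *backend = MEMORY_BACKEND(obj);
        HostMemoryBackendFile *fb = MEMORY_BACKEND_FILE(obj);

        if (fb->discard_data && memory_region_size(&backend->mr)) {
            void *ptr = memory_region_get_ram_ptr(&backend->mr);
            uint64_t sz = memory_region_size(&backend->mr);

            qemu_madvise(ptr, sz, QEMU_MADV_REMOVE);
        }
    }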
Define QEMU_MADV_REMOVE, so we can use it with qemu_madvise().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170824192315.5897-3-ehabkost@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Zack Cornelius <zack.cornelius@kove.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
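(A sketch of the osdep.h pattern, following the other QEMU_MADV_* wrappers;
the fallback constant is indicative.)

    #ifdef MADV_REMOVE
    #define QEMU_MADV_REMOVE MADV_REMOVE
    #else
    #define QEMU_MADV_REMOVE QEMU_MADV_INVALID
    #endif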
Delete all user-creatable objects in /objects when exiting QEMU, so they
can perform cleanup actions.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170824192315.5897-2-ehabkost@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Zack Cornelius <zack.cornelius@kove.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
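(A minimal sketch of the cleanup call, assuming a helper along the lines of
user_creatable_cleanup(); unparenting the /objects container releases every
object created with -object.)

    /* qom/object_interfaces.c (sketch) */
    void user_creatable_cleanup(void)
    {
        object_unparent(object_get_objects_root());
    }

    /* vl.c: called on the exit path, before QEMU terminates */
    user_creatable_cleanup();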
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20170919-v2' into staging
Assorted s390x patches:
- introduce virtio-gpu-ccw, with virtio-gpu endian fixes
- lots of cleanup in the s390x code
- make device_add work for s390x cpus
- enable seccomp on s390x
- an ivshmem endian fix
- set the reserved DHCP client architecture id for netboot
- fixes in the css and pci support
# gpg: Signature made Tue 19 Sep 2017 17:39:45 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170919-v2: (38 commits)
MAINTAINERS/s390x: add terminal3270.c
virtio-ccw: Create a virtio gpu device for the ccw bus
virtio-gpu: Handle endian conversion
s390x/ccw: create s390 phb for compat reasons as well
configure: Allow --enable-seccomp on s390x, too
virtio-ccw: remove stale comments on endianness
s390x: allow CPU hotplug in random core-id order
s390x: generate sclp cpu information from possible_cpus
s390x: get rid of cpu_s390x_create()
s390x: get rid of cpu_states and use possible_cpus instead
s390x: implement query-hotpluggable-cpus
s390x: CPU hot unplug via device_del cannot work for now
s390x: allow cpu hotplug via device_add
s390x: print CPU definitions in sorted order
target/s390x: rename next_cpu_id to next_core_id
target/s390x: use "core-id" for cpu number/address/id handling
target/s390x: set cpu->id for linux user when realizing
s390x: allow only 1 CPU with TCG
target/s390x: use program_interrupt() in per_check_exception()
target/s390x: use trigger_pgm_exception() in s390_cpu_handle_mmu_fault()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170918130455.144262-1-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Wire up the virtio-gpu device for the CCW bus. The virtio-gpu
is a virtio-1 device, so disable revision 0.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <6c53f939cf2d64b66d2a6878b29c9bf3820f3d5b.1505485574.git.alifm@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The virtio GPU code currently only supports the little-endian format,
so using the virtio GPU device on a big-endian machine
does not work.
Let's fix it by supporting the correct host CPU byte order.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Message-Id: <dc748e15f36db808f90b4f2393bc29ba7556a9f6.1505485574.git.alifm@linux.vnet.ibm.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
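(An illustrative sketch of the kind of conversion added: the virtio-gpu
control-queue structures are defined little-endian, so the header fields are
byte-swapped on big-endian hosts; the helper name is indicative.)

    static void virtio_gpu_ctrl_hdr_bswap(struct virtio_gpu_ctrl_hdr *hdr)
    {
        le32_to_cpus(&hdr->type);
        le32_to_cpus(&hdr->flags);
        le64_to_cpus(&hdr->fence_id);
        le32_to_cpus(&hdr->ctx_id);
        le32_to_cpus(&hdr->padding);
    }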
d32bd032d8 ("s390x/ccw: create s390 phb conditionally") made
registering the s390 pci host bridge conditional on the presence
of the zpci facility bit. Sadly, that breaks migration from
machines that did not use the cpu model (2.7 and previous).
Create the s390 phb for pre-cpu model machines as well: we can
tweak s390_has_feat() to always indicate the zpci facility bit
when no cpu model is available (on 2.7 and previous compat machines).
Fixes: d32bd032d8 ("s390x/ccw: create s390 phb conditionally")
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
libseccomp supports s390x since version 2.3.0, and I was able to start
a VM with "-sandbox on" without any obvious problems by using this patch,
so it should be safe to allow --enable-seccomp on s390x nowadays, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1505385363-27717-1-git-send-email-thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Eduardo Otubo <otubo@redhat.com>
Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We have two stale comments suggesting one should think about virtio
config space endianness a bit longer. We have just done that, and have come to
the conclusion that we are fine as is: it's the responsibility of the virtio
device and not of the transport (and that is how it works now). Putting
the responsibility into the transport isn't even possible, because the
transport would have to know about the config space layout of each
device.
Let us remove the stale comments.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Suggested-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20170914105535.47941-1-pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
SCLP correctly indicates the core-id, aka the CPU address, for each available
CPU.
As the core-id corresponds to cpu_index, a newly created kvm vcpu also
gets assigned this core-id as its vcpu id. So SIGP in the kernel works
correctly (it uses the vcpu id to look up the correct CPU).
So there should be nothing hindering us from hotplugging CPUs in random
core-id order.
This now makes sure that the output from "query-hotpluggable-cpus"
is completely true. Until now, a specific order was implicit. Performance-
wise, hotplugging CPUs in non-sequential order might not be the best thing
to do, as the VCPU lookup inside KVM might be a little slower. But that
doesn't hinder us from supporting it.
next_core_id is now used by linux-user only.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-23-david@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is the first step to allow hot plugging of CPUs in a non-sequential
order. Whether a cpu is available ("plugged") can now be decided directly
by looking at the cpu state pointer.
This makes sure that really only cpus attached to the machine are
reported.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-22-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now that there is only one user of cpu_s390x_create() left, make cpu
creation look like on x86.
- Perform the model/properties split and checks in s390_init_cpus()
- Parse features only once without having to remember if already parsed
- Pass only the typename to s390x_new_cpu()
- Use the typename of an existing CPU for hotplug via cpu-add
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-21-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now that we have possible_cpus, we can get rid of the global variable
and rewrite s390_cpu_addr2state() to use it.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-20-david@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
CPU hotplug is only possible on a per core basis on s390x. So let's
add possible_cpus and wire everything up properly.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-19-david@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
device_del on a CPU will currently do nothing. Let's emit an error
stating that this will currently not work (there is no architecture
support on s390x). The error message is copied from ppc.
(qemu) device_del cpu1
device_del cpu1
CPU hot unplug not supported on this machine
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-18-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
E.g. the following now works:
device_add host-s390-cpu,id=cpu1,core-id=1
The system will perform the same checks as when using cpu_add:
- If the core_id is already in use
- If the next sequential core_id isn't used
- If core-id >= max_cpu is specified
In addition, mixed CPU models are checked. E.g. if starting with
-cpu host and trying to hotplug "qemu-s390-cpu":
"Mixed CPU models are not supported on s390x."
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Other architectures provide nicely sorted lists, let's do it similarly on
s390x.
While at it, clean up the code we have to touch either way.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Adapt to the new term "core_id". While at it, fix the type and drop the
initialization to 0 (which is superfluous).
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Some time ago we discussed that using "id" as a property name is not the
right thing to do, as it is a reserved property for other devices and
will not work with device_add.
Switch to the term "core-id" instead, and use it as an equivalent to
the "CPU address" mentioned in the PoP. There is no such thing as a cpu
number, so rename env.cpu_num to env.core_id. We use "core-id" as this is
the common term to use for device_add later on (x86 and ppc).
We can get rid of cpu->id now. Keep cpu_index and env->core_id in sync;
cpu_index was already implicitly used by e.g. cpu_exists(), so keeping
both in sync seems to be the right thing to do.
cpu_index will now no longer automatically get set via
cpu_exec_realizefn(). For now, we were lucky that both implicitly stayed
in sync.
Our new cpu property "core-id" can be a static property. Range checks can
be avoided by using the correct type and the "setting after realized"
check is done implicitly.
device_add will later need the reserved "id" property. Hotplugging a CPU
on s390x will then be: "device_add host-s390-cpu,id=cpu2,core-id=2".
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
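(A minimal sketch of the static property, assuming env.core_id as introduced
above; the property array name is indicative.)

    static Property s390x_cpu_properties[] = {
        DEFINE_PROP_UINT32("core-id", S390CPU, env.core_id, 0),
        DEFINE_PROP_END_OF_LIST(),
    };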
scc->next_cpu_id is updated when realizing. Setting it just before that
point looks cleaner.
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Specifying more than 1 CPU (e.g. -smp 5) leads to SIGP errors (the
guest tries to bring these CPUs up but fails), because we don't support
multiple CPUs on s390x under TCG.
Let's bail out if more than 1 is specified, so we don't raise people's
hopes.
Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-12-david@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Clean it up by reusing program_interrupt(). Add a concern regarding
ilen.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This looks cleaner. linux-user will not use the ilen field, so setting
it doesn't do any harm.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>