When loading ROMs, allow the caller to specify an AddressSpace to use for
the load.
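As a rough illustration of the intended call shape (the *_as() helper name,
its signature and the FLASH_* constants are assumptions for this sketch, not
necessarily what the patch adds):

    /* Sketch: load a ROM image into a CPU-specific address space. */
    AddressSpace *as = cpu_get_address_space(cpu, asidx);
    if (load_image_targphys_as("firmware.bin", FLASH_BASE, FLASH_SIZE, as) < 0) {
        error_report("could not load firmware.bin");
    }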
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 85f86b94ea94879e7ce8b12e85ac8de26658f7eb.1474331683.git.alistair.francis@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If the caller doesn't specify an architecture for the ELF machine,
the load_elf() function will auto-detect it based on the ELF file.
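The detection amounts to taking the machine type from the ELF header when the
caller passes none; a minimal sketch of the idea (not the literal code in
load_elf()):

    /* Sketch: EM_NONE (0) means "let the loader pick the machine type". */
    if (elf_machine == EM_NONE) {
        elf_machine = ehdr.e_machine;   /* auto-detect from the file */
    } else if (elf_machine != ehdr.e_machine) {
        return ELF_LOAD_WRONG_ARCH;     /* mismatch: reject the image */
    }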
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: f2d70b47fcad31445f947f8817a0e146d80a046b.1474331683.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Cadence GEM hardware allows incoming data to be 'screened' based on some
register values. Add support for these screens.
We also need to increase the max regs to avoid compilation failures. These new
registers are implemented in the next patch.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 73e69a8ad9fa2763e9f68f71eaf2469dd5744fcc.1469727764.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Cadence GEM hardware supports multiple priority queues; this patch is a
step towards that by adding a property to set the number of queues. At the
moment behaviour doesn't change, as we only use queue 0.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 6543ec0d0c4bfd2678d0ed683efb197e91b17733.1469727764.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some of the timer devices may behave differently from what ptimer provides
by default. Introduce a ptimer policy feature that allows ptimer users to
change the default (and sometimes wrong) timer behaviour, for example to
continuously trigger a periodic timer when the load value is equal to "0".
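A minimal usage sketch, assuming the policy is passed as a bit mask when the
ptimer is created (the exact flag names are assumptions):

    /* Sketch: ask for the non-default behaviour on hardware that keeps
     * firing a periodic timer even when the load value is 0. */
    s->timer = ptimer_init(bh, PTIMER_POLICY_CONTINUOUS_TRIGGER);

    /* Devices happy with the existing behaviour keep the default policy. */
    s->timer2 = ptimer_init(bh2, PTIMER_POLICY_DEFAULT);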
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Message-id: 994cd608ec392da6e58f0643800dda595edb9d97.1473252818.git.digetx@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Configure the size of the SoC's RAM using a property, so that the value can
be propagated from the board level down to the memory controller.
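One plausible way to wire this up (an illustrative sketch only; the property
and field names are assumptions):

    /* Sketch: the board sets a "ram-size" property on the SoC ... */
    object_property_set_int(OBJECT(&bmc->soc), ram_size, "ram-size",
                            &error_abort);

    /* ... and the SoC forwards it to its memory controller at realize time. */
    object_property_set_int(OBJECT(&s->sdmc), s->ram_size, "ram-size",
                            &error_abort);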
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 1473438177-26079-14-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no need to do this at each reset as the RAM size will not
change.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 1473438177-26079-12-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Based on previous work done by Andrew Jeffery <andrew@aj.id.au>.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1473438177-26079-9-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This gives some explanation behind the magic number 0x120CE416.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1473438177-26079-8-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let's define an object class for each Aspeed SoC we support. An
AspeedSoCInfo struct gathers the SoC specifications, which can later be
used by an instance of the class or by a board using the SoC.
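An illustrative shape for such a per-SoC info table (the field names and
values below are assumptions, not necessarily the exact struct in the patch):

    /* Sketch: one entry per supported SoC, attached to its object class. */
    typedef struct AspeedSoCInfo {
        const char *name;        /* e.g. "ast2400-a0" */
        const char *cpu_model;   /* e.g. "arm926"     */
        uint32_t silicon_rev;
        hwaddr sdram_base;
    } AspeedSoCInfo;

    static const AspeedSoCInfo aspeed_socs[] = {
        { "ast2400-a0", "arm926", AST2400_A0_SILICON_REV, 0x40000000 },
    };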
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 1473438177-26079-4-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a name replacement to prepare ground for other SoCs.
Let's also remove the AST2400_SMC_BASE definition from the address
space mappings, as it is not used. This controller was removed from
the Aspeed AST2500 SoC, so this provides us with a better common base
for the address space mapping on both SoCs.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1473438177-26079-3-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let's prepare for new Aspeed SoCs and rename the ast2400 file to a
more generic one. There are no changes in the code apart from the
header file include.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1473438177-26079-2-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Extend the current module interface to allow for block drivers to be
loaded dynamically on request. The only block drivers that can be
converted into modules are the drivers that don't perform any init
operation except for registering themselves.
In addition, only the protocol drivers are being modularized, as they
are the only ones which see significant performance benefits. The format
drivers do not generally link to external libraries, so modularizing
them is of no benefit from a performance perspective.
All the necessary module information is located in a new structure found
in module_block.h
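For illustration, the per-driver records in block_driver_modules[] can be
thought of as something like this (the field names are assumptions; the real
header is generated at build time):

    /* Sketch: map protocol/format names to the module that provides them. */
    static const struct {
        const char *format_name;
        const char *protocol_name;
        const char *library_name;
    } block_driver_modules[] = {
        { .protocol_name = "gluster", .library_name = "block-gluster" },
    };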
This spoils the purpose of 5505e8b76f (block/dmg: make it modular).
Before this patch, if module build is enabled, block-dmg.so is linked to
libbz2 whereas the main binary is not. Downstream, this theoretically means
that only the qemu-block-extra package depends on libbz2, while the main
QEMU package need not. With this patch, we (temporarily) change this so
that the main QEMU binary depends on libbz2 again.
Signed-off-by: Marc Marí <markmb@redhat.com>
Signed-off-by: Colin Lord <clord@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1471008424-16465-4-git-send-email-clord@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
[mreitz: Do a signed comparison against the length of
block_driver_modules[], so it will not cause a compile error when
empty]
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently, when timing the pbkdf algorithm, a fixed key
size of 32 bytes is used. This results in inaccurate
timings for certain hashes, depending on their digest
size. For example, when using sha1 with aes-256, this
causes us to measure the time for the master key digest
doing 2 sha1 operations per iteration instead of 1.
Instead, we should pass the desired key size into the
timing routine, matching the key size that will be
used for real later.
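(For reference: PBKDF2 produces ceil(key_len / digest_len) HMAC output blocks
per iteration, so timing with a fixed 32-byte key against sha1's 20-byte
digest costs ceil(32 / 20) = 2 sha1 chains per iteration, whereas the key
size actually used for the master key digest needs only 1.)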
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The qcrypto_pbkdf_count_iters method uses a 64-bit int
but then checks its value against INT32_MAX before
returning it. This bounds check is premature, because
the calling code may well scale the iteration count
by some value. It is thus better to return a 64-bit
integer and let the caller do range checking.
For consistency, the qcrypto_pbkdf method is also changed
to accept a 64-bit int, though this is somewhat academic
since nettle is limited to taking an 'int' while gcrypt
is limited to taking a 'long int'.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The cpu model was merged with 2.8; it is wrong to abuse ri_allowed,
which was enabled with 2.7.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.
Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.
This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Currently, devices are plugged before features are negotiated.
If the backend doesn't support VIRTIO_F_VERSION_1, the transport
needs to rewind some settings.
This is the case for CCW, for which a post_plugged callback had
been introduced, where max_rev field is just updated if
VIRTIO_F_VERSION_1 is not supported by the backend.
For PCI, implementing post_plugged would be much more
complicated, so it needs to know whether the backend supports
VIRTIO_F_VERSION_1 at plug time.
Currently, nothing is done for PCI. Modern capabilities get
exposed to the guest even if VIRTIO_F_VERSION_1 is not supported
by the backend, which confuses the guest.
This patch replaces the existing post_plugged solution with an
approach that fits both transports.
Feature negotiation is performed before the ->device_plugged() call.
A pre_plugged callback is introduced so that the transports can
set their supported features.
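A rough sketch of how a transport might use such a callback (illustrative
only; the exact callback signature and where the feature bit is set are
assumptions):

    /* Sketch: the transport advertises what it supports before the device's
     * features are negotiated at plug time. */
    static void virtio_pci_pre_plugged(DeviceState *d, Error **errp)
    {
        VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
        VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);

        virtio_add_feature(&vdev->host_features, VIRTIO_F_VERSION_1);
    }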
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-stable@nongnu.org
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com> [ccw]
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Unused function declarations were found using a simple gcc plugin and
manually verified by grepping the sources.
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
All operations that take a floatx80 as an operand need to have their
inputs checked for malformed encodings. In all of these cases, use the
function floatx80_invalid_encoding to perform the check. If an invalid
operand is found, raise an invalid operation exception, and then return
either NaN (for fp-typed results) or the integer indefinite value (the
minimum representable signed integer value, for int-typed results).
For the non-quiet comparison operations, this touches adjacent code in
order to pass style checks.
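The pattern applied at each call site is roughly the following (a sketch; the
default-NaN helper and how the integer indefinite is returned are assumptions
here):

    /* Sketch: reject malformed 80-bit operands up front. */
    if (floatx80_invalid_encoding(a) || floatx80_invalid_encoding(b)) {
        float_raise(float_flag_invalid, status);
        return floatx80_default_nan(status); /* or 1ULL << 63 (the integer
                                              * indefinite) for int results */
    }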
Signed-off-by: Andrew Dutcher <andrew@andrewdutcher.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1471392895-17324-1-git-send-email-andrew@andrewdutcher.com
[PMM: changed "1 << 63" to "1ULL << 63" to fix compile errors]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1472496380-19706-6-git-send-email-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since the two users don't make use of the returned offset,
beyond ensuring that the entire buffer is zero, consider the
can_use_buffer_find_nonzero_offset and buffer_find_nonzero_offset
functions internal.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1472496380-19706-4-git-send-email-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Load the LAPIC state during post_load (rather than when the CPU
starts).
This allows an interrupt to be delivered from the ioapic to
the lapic prior to the CPU being loaded, in particular from the RTC,
which starts ticking as soon as we load its state.
Fixes a case where Windows hangs after migration due to RTC interrupts
disappearing.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use the __atomic_*_n() primitives, which take the value as an argument. It
is then not necessary to store the value locally before calling the
primitive, saving us a stack store and load.
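For example, the difference is essentially the following (generic GCC/Clang
builtins, not the actual QEMU diff):

    /* Before: a local temporary is needed because __atomic_store() takes a
     * pointer to the value. */
    __typeof__(*ptr) tmp = val;
    __atomic_store(ptr, &tmp, __ATOMIC_SEQ_CST);

    /* After: __atomic_store_n() takes the value directly. */
    __atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);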
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20160829171701.14025-1-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove the redundant barrier() after the fence as agreed in previous
discussion here:
https://lists.gnu.org/archive/html/qemu-devel/2016-04/msg00489.html
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20160824204424.14041-3-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm_setup_guest_memory only does "madvise to QEMU_MADV_DONTFORK" and
is only called by ram_block_add, which actually is duplicate code.
Bonus: add a simple comment for kvm_has_sync_mmu to make life easier.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Message-Id: <1473662096-32598-1-git-send-email-caoj.fnst@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The comment is outdated. The patch makes the following changes:
1. Tense correction.
2. All clock time values are returned in nanoseconds, so they have the same
   precision.
3. The virtual clock doesn't use CPU cycles.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Message-Id: <1469790338-28990-2-git-send-email-caoj.fnst@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When invalidating a translation block, set an invalid flag into the
TranslationBlock structure first. It is also necessary to check whether
the target TB is still valid after acquiring 'tb_lock' but before calling
tb_add_jump() since TB lookup is to be performed out of 'tb_lock' in
future. Note that we don't have to check 'last_tb'; an already invalidated
TB will not be executed anyway and it is thus safe to patch it.
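In pseudocode, the patching path then looks something like this (a sketch of
the locking pattern, not the literal patch):

    /* Sketch: the TB was looked up outside tb_lock, so re-check it. */
    tb_lock();
    if (!tb->invalid && last_tb) {
        /* Patching an already-invalidated last_tb is harmless: it will
         * never be executed again. */
        tb_add_jump(last_tb, tb_exit, tb);
    }
    tb_unlock();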
Suggested-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of accessing the tqe_prev field directly outside of queue.h, use
macros to check whether an element is in a list, and make sure that after
an element is removed from a list the tqe_prev field can still be used to
do the same check.
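The intended pattern looks roughly like this (a sketch; the QTAILQ_IN_USE()
macro name and the variables are assumptions):

    /* Sketch: instead of peeking at dev->next.tqe_prev directly ... */
    if (QTAILQ_IN_USE(dev, next)) {       /* element is on some list */
        QTAILQ_REMOVE(&devices, dev, next);
    }
    /* ... and removal resets the linkage, so the check stays valid. */
    assert(!QTAILQ_IN_USE(dev, next));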
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1469450832-84343-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Right after main_loop ends, we release various things but keep the iothread
alive. The latter is not prepared for the sudden change of resources.
Specifically, after bdrv_close_all(), the virtio-scsi dataplane gets a
surprise at the empty BlockBackend:
(gdb) bt
at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577
This is because d->conf.blk->root is set to NULL, so
blk_get_aio_context() returns qemu_aio_context, whereas s->ctx still
points to the iothread:
hw/scsi/virtio-scsi.c:543:

    if (s->dataplane_started) {
        assert(blk_get_aio_context(d->conf.blk) == s->ctx);
    }
To fix this, let's stop iothreads before doing bdrv_close_all().
Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1473326931-9699-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Auto-complete the mirror job in the background to prevent it from
blocking synchronously.
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com>
Message-id: 1469602913-20979-7-git-send-email-xiecl.fnst@cn.fujitsu.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Normal backup (sync='none') workflow:

step 1. NBD performs an I/O write from client to server
    qcow2_co_writev
      bdrv_co_writev
      ...
        bdrv_aligned_pwritev
          notifier_with_return_list_notify -> backup_do_cow
          bdrv_driver_pwritev // write new contents

step 2. drive-backup sync=none
    backup_do_cow
    {
        wait_for_overlapping_requests
        cow_request_begin
        for (; start < end; start++) {
            bdrv_co_readv_no_serialising // read old contents from Secondary disk
            bdrv_co_writev               // write old contents to hidden-disk
        }
        cow_request_end
    }

step 3. Then roll back to "step 1" to write new contents to Secondary disk.
And for replication, we must make sure that we only read the old contents from
Secondary disk in order to keep contents consistent.
1) Replication workflow of Secondary

                                                      virtio-blk
                                                          ^
-------> 1 NBD                                            |
||       server                              3 replication
||          ^                                             ^
||          |       backing                 backing       |
||  Secondary disk 6<-------- hidden-disk 5 <-------- active-disk 4
||          |                       ^
||          '-----------------------'
||             drive-backup sync=none 2
Hence, we need these interfaces to implement coarse-grained serialization between
COW of Secondary disk and the read operation of replication.
Example code about how to use them:

    #include "block/block_backup.h"

    static coroutine_fn int xxx_co_readv()
    {
        CowRequest req;
        BlockJob *job = secondary_disk->bs->job;
        int ret;

        if (job) {
            backup_wait_for_overlapping_requests(job, start, end);
            backup_cow_request_begin(&req, job, start, end);
            ret = bdrv_co_readv();
            backup_cow_request_end(&req);
            goto out;
        }
        ret = bdrv_co_readv();
    out:
        return ret;
    }
Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1469602913-20979-4-git-send-email-xiecl.fnst@cn.fujitsu.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Implement the new virtio sockets device for host<->guest communication
using the Sockets API. Most of the work is done in a vhost kernel
driver so that virtio-vsock can hook into the AF_VSOCK address family.
The QEMU vhost-vsock device handles configuration and live migration
while the rx/tx happens in the vhost_vsock.ko Linux kernel driver.
The vsock device must be given a CID (host-wide unique address):
# qemu -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 ...
For more information see:
http://qemu-project.org/Features/VirtioVsock
[Endianness fixes and virtio-ccw support by Claudio Imbrenda
<imbrenda@linux.vnet.ibm.com>]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
[mst: rebase to master]
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
virtqueue_discard() requires a VirtQueueElement but virtio-balloon does
not migrate its in-use element. Introduce a new function that is
similar to virtqueue_discard() but doesn't require a VirtQueueElement.
This will allow virtio-balloon to access the element again after migration,
with the usual proviso that the guest may have modified the vring since
last time.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Roman Kagan <rkagan@virtuozzo.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Currently, each VQ Notification Virtio Capability is allocated
on a different page. The idea is to enable split drivers within
guests; however, there are no known plans to do that.
The allocation results in an 8MB BAR, more than various
guest firmwares pre-allocate for the PCI bridge hotplug process.
Reserve 4 bytes per VQ by default and add a new parameter
"page-per-vq" to be used with split drivers.
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Some software algorithms are based on the hardware's cache info. For example,
in the x86 Linux kernel, when cpu1 wants to wake up a task on cpu2, cpu1 will
trigger a resched IPI and tell cpu2 to do the wakeup if they don't share a
low level cache. Conversely, cpu1 will access cpu2's runqueue directly if
they share the LLC. The relevant linux-kernel code is below:
    static void ttwu_queue(struct task_struct *p, int cpu)
    {
        struct rq *rq = cpu_rq(cpu);
        ......
        if (... && !cpus_share_cache(smp_processor_id(), cpu)) {
            ......
            ttwu_queue_remote(p, cpu); /* will trigger RES IPI */
            return;
        }
        ......
        ttwu_do_activate(rq, p, 0); /* access target's rq directly */
        ......
    }
In real hardware, the cpus on the same socket share the L3 cache, so one cpu
won't trigger resched IPIs when waking up a task on another. But QEMU doesn't
present virtual L3 cache info for the VM, so the Linux guest will trigger lots
of RES IPIs under some workloads, even if the virtual cpus belong to the same
virtual socket. For KVM, there will be lots of vmexits due to the guest
sending IPIs.
The workload is SAP HANA's testsuite; we ran it for one round (about 40
minutes) and observed the (Suse11sp3) guest's counts of RES IPIs triggered
during that period:
         No-L3       With-L3 (applied this patch)
cpu0:    363890      44582
cpu1:    373405      43109
cpu2:    340783      43797
cpu3:    333854      43409
cpu4:    327170      40038
cpu5:    325491      39922
cpu6:    319129      42391
cpu7:    306480      41035
cpu8:    161139      32188
cpu9:    164649      31024
cpu10:   149823      30398
cpu11:   149823      32455
cpu12:   164830      35143
cpu13:   172269      35805
cpu14:   179979      33898
cpu15:   194505      32754
avg:     268963.6    40129.8
The VM's topology is "1*socket 8*cores 2*threads".
After presenting virtual L3 cache info to the VM, the number of RES IPIs in
the guest is reduced by 85%.
For KVM, vcpus sending IPIs causes vmexits, which are expensive, so this can
cause severe performance degradation. We also tested the overall system
performance with the vcpus actually running on separate physical sockets.
With L3 cache, the performance improves by 7.2%~33.1% (avg: 15.7%).
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This will be used by the next patch.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>