Currently the Makefile creates TAGS for emacs with the command:
find "$(SRC_PATH)" -name '*.[hc]' -print0 | xargs -0 etags
That works only if xargs ends up invoking etags just once. If xargs runs
etags several times, as it will if there are enough files, then the later
invocations will overwrite the output from the earlier invocations. This
patch uses the etags --append option to fix the bug.
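With --append the command becomes:
find "$(SRC_PATH)" -name '*.[hc]' -print0 | xargs -0 etags --append
Since --append accumulates entries across runs, the rule presumably also
has to delete any stale TAGS file before regenerating it.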
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 1363057048-21534-1-git-send-email-david@gibson.dropbear.id.au
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
# By Stefan Hajnoczi (14) and others
# Via Stefan Hajnoczi
* stefanha/block: (28 commits)
blockdev: Fix up copyright and permission notice
qemu-iotests: use -nographic in test case 007
qemu-iotests: add tests for rebasing zero clusters
dataplane: fix hang introduced by AioContext transition
coroutine: use AioContext for CoQueue BH
threadpool: drop global thread pool
block: add bdrv_get_aio_context()
aio: add a ThreadPool instance to AioContext
threadpool: add thread_pool_new() and thread_pool_free()
threadpool: move globals into struct ThreadPool
main-loop: add qemu_get_aio_context()
sheepdog: set io_flush handler in do_co_req
sheepdog: use non-blocking fd in coroutine context
qcow2: make is_allocated return true for zero clusters
qcow2: drop unnecessary flush in qcow2_update_snapshot_refcount()
qcow2: drop flush in update_cluster_refcount()
qcow2: flush in qcow2_update_snapshot_refcount()
qcow2: set L2 cache dependency in qcow2_alloc_bytes()
qcow2: flush refcount cache correctly in qcow2_write_snapshots()
qcow2: flush refcount cache correctly in alloc_refcount_block()
...
# By Christian Borntraeger (1) and Cornelia Huck (1)
# Via Cornelia Huck
* cohuck/virtio-ccw-upstr:
virtio-ccw: Wire up virtio-rng.
virtio-ccw: remove qdev_unparent in unplug routine
Screwed up in commit 666daa68. Thanks to Kevin Wolf for reminding me
to fix this.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
A comment explains that -nographic hangs test case 007. This is no
longer the case, so add -nographic. This makes the test suite faster and
more pleasant to run since no windows pop up.
I am not sure exactly when -nographic started working for this case, but
there is no fundamental reason why graphics are needed here. Make sure
the serial port is not on stdio, since it would conflict with the monitor.
Also remove unnecessary trailing whitespace on these lines.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Kevin Wolf <kwolf@redhat.com>
If zero clusters are erroneously treated as unallocated, "qemu-img rebase"
will copy the backing file's contents onto the cluster.
The bug existed also in image streaming, but since the root cause was in
qcow2's is_allocated implementation it is enough to test it with qemu-img.
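A minimal sketch of the logic at fault (simplified; the cluster-type
helper and constants are qcow2's, the surrounding code is elided):

    switch (qcow2_get_cluster_type(l2_entry)) {
    case QCOW2_CLUSTER_UNALLOCATED:
        return 0;                   /* backing file shows through */
    case QCOW2_CLUSTER_ZERO:        /* must count as allocated, too */
    case QCOW2_CLUSTER_NORMAL:
    case QCOW2_CLUSTER_COMPRESSED:
        return 1;
    default:
        return 0;
    }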
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The bug is that the EventNotifiers have a NULL io_flush callback.
Because _none_ of the handlers on the dataplane AioContext has an
io_flush callback, aio_poll will simply do nothing. Fix this by adding
the callbacks: the ioeventfd will always be polled (this can change in
the future to pause/resume the processing during live snapshots or
similar operations); the ioqueue will be polled if there are
outstanding requests.
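A sketch of the two callbacks (function and field names here are
illustrative, not necessarily those in the patch):

    /* The guest's ioeventfd is always worth polling. */
    static int flush_true(EventNotifier *e)
    {
        return 1;
    }

    /* The ioqueue notifier only matters while requests are outstanding. */
    static int flush_io(EventNotifier *e)
    {
        VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
                                               io_notifier);
        return s->num_reqs > 0;
    }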
I must admit I have screwed up my testing somehow, because commit
2c20e71 does not work even if cherry-picked on top of 1.4.0, and this
patch fixes it there as well.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
CoQueue uses a BH to awake coroutines that were made ready to run again
using qemu_co_queue_next() or qemu_co_queue_restart_all(). The BH
currently runs in the iothread AioContext and would break coroutines
that run in a different AioContext.
This is a slightly tricky problem because the lifetime of the BH exceeds
that of the CoQueue. This means coroutines can be awoken after CoQueue
itself has been freed. Also, there is no qemu_co_queue_destroy()
function which we could use to handle freeing resources.
Introducing qemu_co_queue_destroy() has a ripple effect of requiring us
to also add qemu_co_mutex_destroy() and qemu_co_rwlock_destroy(), as
well as updating all callers. Avoid doing that.
We also cannot switch from BH to GIdle function because aio_poll() does
not dispatch GIdle functions. (GIdle functions make memory management
slightly easier because they free themselves.)
Finally, I don't want to move unlock_queue and unlock_bh into
AioContext. That would break encapsulation - AioContext isn't supposed
to know about CoQueue.
This patch implements a different solution: each qemu_co_queue_next() or
qemu_co_queue_restart_all() call creates a new BH and list of coroutines
to wake up. Callers tend to invoke qemu_co_queue_next() and
qemu_co_queue_restart_all() occasionally after blocking I/O, so creating
a new BH for each call shouldn't be massively inefficient.
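A sketch of that scheme (struct, function, and field names are
illustrative):

    typedef struct {
        QEMUBH *bh;                          /* one-shot BH for this wakeup */
        QTAILQ_HEAD(, Coroutine) entries;    /* coroutines to enter */
    } CoQueueNextData;

    static void co_queue_next_bh(void *opaque)
    {
        CoQueueNextData *data = opaque;
        Coroutine *next;

        /* Enter every queued coroutine, then clean up; nothing here has
         * to outlive the call, so freeing the CoQueue early is safe. */
        while ((next = QTAILQ_FIRST(&data->entries)) != NULL) {
            QTAILQ_REMOVE(&data->entries, next, co_queue_next);
            qemu_coroutine_enter(next, NULL);
        }
        qemu_bh_delete(data->bh);
        g_free(data);
    }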
Note that this patch does not add an interface for specifying the
AioContext. That is left to future patches which will convert CoQueue,
CoMutex, and CoRwlock to expose AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Now that each AioContext has a ThreadPool and the main loop AioContext
can be fetched with bdrv_get_aio_context(), we can eliminate the concept
of a global thread pool from thread-pool.c.
The submit functions must take a ThreadPool* argument.
block/raw-posix.c and block/raw-win32.c use
aio_get_thread_pool(bdrv_get_aio_context(bs)) to fetch the main loop's
ThreadPool.
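For example, the AIO path in block/raw-posix.c ends up looking roughly
like this (the aio variant of the submit functions; local identifiers
abbreviated):

    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
    return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);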
tests/test-thread-pool.c must be updated to reflect the new
thread_pool_submit() function prototypes.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
For now bdrv_get_aio_context() is just a stub that calls
qemu_get_aio_context(), since the block layer is currently tied to the
main loop AioContext.
Add the stub now so that the block layer can begin accessing its
AioContext.
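The stub amounts to:

    AioContext *bdrv_get_aio_context(BlockDriverState *bs)
    {
        /* For now the block layer runs everything in the main loop. */
        return qemu_get_aio_context();
    }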
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds a ThreadPool to AioContext. It's possible that some
AioContext instances will never use the ThreadPool, so defer creation
until aio_get_thread_pool().
AioContext should have the ThreadPool because the ThreadPool is bound
to an AioContext instance, where the work item's callback function is
invoked. It doesn't make sense to keep the ThreadPool pointer anywhere
other than AioContext. For example,
block/raw-posix.c can get its AioContext's ThreadPool and submit work.
Special note about headers: I used struct ThreadPool in aio.h because
there is a circular dependency if aio.h includes thread-pool.h.
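A sketch of both points (the AioContext field name is assumed):

    /* In aio.h: forward-declare instead of including thread-pool.h. */
    struct ThreadPool;

    /* Create the pool lazily, on first use. */
    ThreadPool *aio_get_thread_pool(AioContext *ctx)
    {
        if (!ctx->thread_pool) {
            ctx->thread_pool = thread_pool_new(ctx);
        }
        return ctx->thread_pool;
    }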
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
ThreadPool is tied to an AioContext through its event notifier, which
dictates in which AioContext the work item's callback function will be
invoked.
In order to support multiple AioContexts we need to support multiple
ThreadPool instances.
This patch adds the new/free functions. The free function deserves
special attention because it quiesces remaining worker threads. This
requires a new condition variable and a "stopping" flag to let workers
know they should terminate once idle.
We never needed to do this before since the global thread pool was not
explicitly destroyed until process termination.
Also stash the AioContext pointer in ThreadPool so that we can call
aio_set_event_notifier() in thread_pool_free(). We didn't need to hold
onto AioContext previously since there was no free function.
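A sketch of the quiescing in thread_pool_free() (lock, flag, and
counter names assumed):

    qemu_mutex_lock(&pool->lock);
    pool->stopping = true;                   /* workers exit once idle */
    while (pool->cur_threads > 0) {
        qemu_sem_post(&pool->sem);           /* wake an idle worker */
        qemu_cond_wait(&pool->worker_stopped, &pool->lock);
    }
    qemu_mutex_unlock(&pool->lock);

    /* This is why ThreadPool stashes the AioContext pointer. */
    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL, NULL);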
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Move global variables into a struct so multiple thread pools can be
supported in the future.
This patch does not change thread-pool.h interfaces. There is still a
global thread pool and it is not yet possible to create/destroy
individual thread pools. Moving the variables into a struct first makes
later patches easier to review.
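Schematically, the globals become fields of one struct (a partial
sketch; field names assumed):

    struct ThreadPool {
        EventNotifier notifier;     /* signals completed work items */
        QemuMutex lock;
        QemuSemaphore sem;          /* counts pending requests */
        int cur_threads;
        int idle_threads;
        QTAILQ_HEAD(, ThreadPoolElement) head;  /* submitted work items */
    };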
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
It is very useful to be able to get the main loop AioContext, which is
kept in a static variable in main-loop.c.
I'm not sure whether qemu_get_aio_context() will be necessary in the
future once devices focus on using their own AioContext instead of the
main loop AioContext, but for now it allows us to refactor code to
support multiple AioContexts while actually passing the main loop
AioContext.
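The accessor itself is trivial (assuming the static variable is called
qemu_aio_context):

    AioContext *qemu_get_aio_context(void)
    {
        return qemu_aio_context;
    }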
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
If an io_flush handler is not set, qemu_aio_wait doesn't invoke
callbacks.
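A sketch of what this implies for do_co_req (handler names
hypothetical):

    static int have_co_req(void *opaque)
    {
        /* Installed only while a request is in flight, so always busy. */
        return 1;
    }

    /* When registering the socket in do_co_req: */
    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);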
Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Using a blocking socket in coroutine context reduces the chance of
switching to other work. This patch makes the sheepdog driver always
use a non-blocking fd.
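The change boils down to something like this after the driver connects
(placement and error handling assumed; socket_set_nonblock() is QEMU's
socket helper):

    fd = connect_to_sdog(s->addr, s->port);
    if (fd >= 0) {
        socket_set_nonblock(fd);    /* never block the coroutine's thread */
    }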
Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Otherwise, live migration of the top layer will miss zero clusters and
let the backing file show through. This also matches what is done in qed.
QCOW2_CLUSTER_ZERO clusters are invalid in v2 image files. Check this
directly in qcow2_get_cluster_offset instead of replicating the test
everywhere.
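A sketch of the check in qcow2_get_cluster_offset (simplified):

    case QCOW2_CLUSTER_ZERO:
        if (s->qcow_version < 3) {
            return -EIO;            /* zero clusters are v3-only */
        }
        break;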
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We already flush when the function completes. There is no need to flush
after every compressed cluster.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>