Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* two old patches from prospective GSoC students
* i386 -kernel device tree support
* Coverity fix
* memory usage improvement from Peter
* checkpatch fix
* g_path_get_dirname cleanup
* caching of block status for iSCSI

# gpg: Signature made Tue 19 Jul 2016 07:43:41 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream:
  target-i386: Remove redundant HF_SOFTMMU_MASK
  block/iscsi: allow caching of the allocation map
  block/iscsi: fix rounding in iscsi_allocationmap_set
  Move README to markdown
  cpu-exec: Move down some declarations in cpu_exec()
  exec: avoid realloc in phys_map_node_reserve
  checkpatch: consider git extended headers valid patches
  megasas: remove useless check for cmd->frame
  compiler: never omit assertions if using a static analysis tool
  hw/i386: add device tree support
  Changed malloc to g_malloc, free to g_free in bsd-user/qemu.h
  use g_path_get_dirname instead of dirname

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell 2016-07-19 15:08:05 +01:00
commit a3b3437721
14 changed files with 280 additions and 106 deletions


@ -1,5 +1,5 @@
QEMU README
===========
QEMU
---
QEMU is a generic and open source machine & userspace emulator and
virtualizer.
@ -31,31 +31,31 @@ version 2. For full licensing details, consult the LICENSE file.
Building
========
---
QEMU is multi-platform software intended to be buildable on all modern
Linux platforms, OS-X, Win32 (via the Mingw64 toolchain) and a variety
of other UNIX targets. The simple steps to build QEMU are:
mkdir build
cd build
../configure
make
mkdir build
cd build
../configure
make
Complete details of the process for building and configuring QEMU for
all supported host platforms can be found in the qemu-tech.html file.
Additional information can also be found online via the QEMU website:
http://qemu-project.org/Hosts/Linux
http://qemu-project.org/Hosts/W32
http://qemu-project.org/Hosts/Linux
http://qemu-project.org/Hosts/W32
Submitting patches
==================
---
The QEMU source code is maintained under the GIT version control system.
git clone git://git.qemu-project.org/qemu.git
git clone git://git.qemu-project.org/qemu.git
When submitting patches, the preferred approach is to use 'git
format-patch' and/or 'git send-email' to format & send the mail to the
@ -66,18 +66,18 @@ guidelines set out in the HACKING and CODING_STYLE files.
Additional information on submitting patches can be found online via
the QEMU website
http://qemu-project.org/Contribute/SubmitAPatch
http://qemu-project.org/Contribute/TrivialPatches
http://qemu-project.org/Contribute/SubmitAPatch
http://qemu-project.org/Contribute/TrivialPatches
Bug reporting
=============
---
The QEMU project uses Launchpad as its primary upstream bug tracker. Bugs
found when running code built from QEMU git or upstream released sources
should be reported via:
https://bugs.launchpad.net/qemu/
https://bugs.launchpad.net/qemu/
If using QEMU via an operating system vendor pre-built binary package, it
is preferable to report bugs to the vendor's own bug tracker first. If
@ -86,22 +86,21 @@ reported via launchpad.
For additional information on bug reporting consult:
http://qemu-project.org/Contribute/ReportABug
http://qemu-project.org/Contribute/ReportABug
Contact
=======
---
The QEMU community can be contacted in a number of ways, with the two
main methods being email and IRC
- qemu-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/qemu-devel
- #qemu on irc.oftc.net
- Mailing List: qemu-devel@nongnu.org
- Archives: http://lists.nongnu.org/mailman/listinfo/qemu-devel
- IRC: #qemu on irc.oftc.net
Information on additional methods of contacting the community can be
found online via the QEMU website:
http://qemu-project.org/Contribute/StartHere
-- End


@ -2,7 +2,7 @@
* QEMU Block driver for iSCSI images
*
* Copyright (c) 2010-2011 Ronnie Sahlberg <ronniesahlberg@gmail.com>
* Copyright (c) 2012-2015 Peter Lieven <pl@kamp.de>
* Copyright (c) 2012-2016 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@ -61,7 +61,23 @@ typedef struct IscsiLun {
struct scsi_inquiry_logical_block_provisioning lbp;
struct scsi_inquiry_block_limits bl;
unsigned char *zeroblock;
unsigned long *allocationmap;
/* The allocmap tracks which clusters (pages) on the iSCSI target are
* allocated and which are not. In case a target returns zeros for
* unallocated pages (iscsilun->lbprz) we can directly return zeros instead
* of reading zeros over the wire if a read request falls within an
* unallocated block. As there are 3 possible states we need 2 bitmaps to
* track them. allocmap_valid keeps track of whether QEMU's information about
* a page is valid. allocmap tracks whether a page is allocated or not. If
* QEMU has no valid information about a page, the corresponding allocmap
* entry should be switched to unallocated as well, to force a new lookup of
* the allocation status, as lookups are generally skipped if a page is
* suspected to be allocated. If an iSCSI target is opened with
* cache.direct = on, allocmap_valid is not created, so all cached
* information is treated as invalid and a fresh lookup is made for any page
* even if the allocmap entry says it is unallocated. */
unsigned long *allocmap;
unsigned long *allocmap_valid;
long allocmap_size;
int cluster_sectors;
bool use_16_for_rw;
bool write_protected;
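
The comment above describes three per-cluster states encoded in two bitmaps. As a rough illustration of that logic (this is not code from the commit; the enum and helper names are made up), the lookup boils down to:

#include <stdbool.h>
#include <stdio.h>

enum cluster_state { CLUSTER_ALLOCATED, CLUSTER_UNALLOCATED, CLUSTER_UNKNOWN };

/* allocmap_valid says whether our knowledge about the cluster is current;
 * allocmap says whether the cluster is (believed to be) allocated. */
static enum cluster_state classify_cluster(bool valid, bool allocated)
{
    if (!valid) {
        /* no trustworthy information: force a fresh get_block_status lookup */
        return CLUSTER_UNKNOWN;
    }
    return allocated ? CLUSTER_ALLOCATED : CLUSTER_UNALLOCATED;
}

int main(void)
{
    /* with cache.direct = on, allocmap_valid is never created, so every
     * cluster reports CLUSTER_UNKNOWN and is re-queried on access */
    printf("%d\n", classify_cluster(false, true));   /* UNKNOWN */
    printf("%d\n", classify_cluster(true, false));   /* UNALLOCATED */
    printf("%d\n", classify_cluster(true, true));    /* ALLOCATED */
    return 0;
}
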
@ -422,37 +438,135 @@ static bool is_sector_request_lun_aligned(int64_t sector_num, int nb_sectors,
iscsilun);
}
static unsigned long *iscsi_allocationmap_init(IscsiLun *iscsilun)
static void iscsi_allocmap_free(IscsiLun *iscsilun)
{
return bitmap_try_new(DIV_ROUND_UP(sector_lun2qemu(iscsilun->num_blocks,
iscsilun),
iscsilun->cluster_sectors));
g_free(iscsilun->allocmap);
g_free(iscsilun->allocmap_valid);
iscsilun->allocmap = NULL;
iscsilun->allocmap_valid = NULL;
}
static void iscsi_allocationmap_set(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors)
static int iscsi_allocmap_init(IscsiLun *iscsilun, int open_flags)
{
if (iscsilun->allocationmap == NULL) {
return;
iscsi_allocmap_free(iscsilun);
iscsilun->allocmap_size =
DIV_ROUND_UP(sector_lun2qemu(iscsilun->num_blocks, iscsilun),
iscsilun->cluster_sectors);
iscsilun->allocmap = bitmap_try_new(iscsilun->allocmap_size);
if (!iscsilun->allocmap) {
return -ENOMEM;
}
bitmap_set(iscsilun->allocationmap,
sector_num / iscsilun->cluster_sectors,
DIV_ROUND_UP(nb_sectors, iscsilun->cluster_sectors));
if (open_flags & BDRV_O_NOCACHE) {
/* in case cache.direct = on, all allocmap entries are
* treated as invalid to force a relookup of the block
* status on every read request */
return 0;
}
iscsilun->allocmap_valid = bitmap_try_new(iscsilun->allocmap_size);
if (!iscsilun->allocmap_valid) {
/* if we are under memory pressure free the allocmap as well */
iscsi_allocmap_free(iscsilun);
return -ENOMEM;
}
return 0;
}
static void iscsi_allocationmap_clear(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors)
static void
iscsi_allocmap_update(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors, bool allocated, bool valid)
{
int64_t cluster_num, nb_clusters;
if (iscsilun->allocationmap == NULL) {
int64_t cl_num_expanded, nb_cls_expanded, cl_num_shrunk, nb_cls_shrunk;
if (iscsilun->allocmap == NULL) {
return;
}
cluster_num = DIV_ROUND_UP(sector_num, iscsilun->cluster_sectors);
nb_clusters = (sector_num + nb_sectors) / iscsilun->cluster_sectors
- cluster_num;
if (nb_clusters > 0) {
bitmap_clear(iscsilun->allocationmap, cluster_num, nb_clusters);
/* expand to entirely contain all affected clusters */
cl_num_expanded = sector_num / iscsilun->cluster_sectors;
nb_cls_expanded = DIV_ROUND_UP(sector_num + nb_sectors,
iscsilun->cluster_sectors) - cl_num_expanded;
/* shrink to touch only completely contained clusters */
cl_num_shrunk = DIV_ROUND_UP(sector_num, iscsilun->cluster_sectors);
nb_cls_shrunk = (sector_num + nb_sectors) / iscsilun->cluster_sectors
- cl_num_shrunk;
if (allocated) {
bitmap_set(iscsilun->allocmap, cl_num_expanded, nb_cls_expanded);
} else {
bitmap_clear(iscsilun->allocmap, cl_num_shrunk, nb_cls_shrunk);
}
if (iscsilun->allocmap_valid == NULL) {
return;
}
if (valid) {
bitmap_set(iscsilun->allocmap_valid, cl_num_shrunk, nb_cls_shrunk);
} else {
bitmap_clear(iscsilun->allocmap_valid, cl_num_expanded,
nb_cls_expanded);
}
}
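
The expanded/shrunk distinction matters because it is always safe to over-approximate "allocated" but never "unallocated". A small self-contained example of the two roundings, assuming a hypothetical cluster size of 8 sectors and re-deriving the same DIV_ROUND_UP arithmetic used above:

#include <inttypes.h>
#include <stdio.h>

#define CLUSTER_SECTORS 8   /* stands in for iscsilun->cluster_sectors */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    int64_t sector_num = 10, nb_sectors = 11;   /* request covers sectors [10, 21) */

    /* expanded: every cluster the request touches (round outward) */
    int64_t cl_exp = sector_num / CLUSTER_SECTORS;                           /* 1 */
    int64_t nb_exp = DIV_ROUND_UP(sector_num + nb_sectors, CLUSTER_SECTORS)
                     - cl_exp;                                               /* 3 - 1 = 2 */

    /* shrunk: only clusters completely contained in the request (round inward) */
    int64_t cl_shr = DIV_ROUND_UP(sector_num, CLUSTER_SECTORS);              /* 2 */
    int64_t nb_shr = (sector_num + nb_sectors) / CLUSTER_SECTORS - cl_shr;   /* 2 - 2 = 0 */

    /* marking "allocated" may over-approximate, so it uses the expanded
     * range; marking "unallocated" must not, so it uses the shrunk range
     * (here no cluster is fully covered, so nothing would be cleared) */
    printf("expanded: clusters [%" PRId64 ", %" PRId64 ")\n", cl_exp, cl_exp + nb_exp);
    printf("shrunk:   clusters [%" PRId64 ", %" PRId64 ")\n", cl_shr, cl_shr + nb_shr);
    return 0;
}
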
static void
iscsi_allocmap_set_allocated(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors)
{
iscsi_allocmap_update(iscsilun, sector_num, nb_sectors, true, true);
}
static void
iscsi_allocmap_set_unallocated(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors)
{
/* Note: if cache.direct=on the fifth argument to iscsi_allocmap_update
* is ignored, so this will in effect be an iscsi_allocmap_set_invalid.
*/
iscsi_allocmap_update(iscsilun, sector_num, nb_sectors, false, true);
}
static void iscsi_allocmap_set_invalid(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors)
{
iscsi_allocmap_update(iscsilun, sector_num, nb_sectors, false, false);
}
static void iscsi_allocmap_invalidate(IscsiLun *iscsilun)
{
if (iscsilun->allocmap) {
bitmap_zero(iscsilun->allocmap, iscsilun->allocmap_size);
}
if (iscsilun->allocmap_valid) {
bitmap_zero(iscsilun->allocmap_valid, iscsilun->allocmap_size);
}
}
static inline bool
iscsi_allocmap_is_allocated(IscsiLun *iscsilun, int64_t sector_num,
int nb_sectors)
{
unsigned long size;
if (iscsilun->allocmap == NULL) {
return true;
}
size = DIV_ROUND_UP(sector_num + nb_sectors, iscsilun->cluster_sectors);
return !(find_next_bit(iscsilun->allocmap, size,
sector_num / iscsilun->cluster_sectors) == size);
}
static inline bool iscsi_allocmap_is_valid(IscsiLun *iscsilun,
int64_t sector_num, int nb_sectors)
{
unsigned long size;
if (iscsilun->allocmap_valid == NULL) {
return false;
}
size = DIV_ROUND_UP(sector_num + nb_sectors, iscsilun->cluster_sectors);
return (find_next_zero_bit(iscsilun->allocmap_valid, size,
sector_num / iscsilun->cluster_sectors) == size);
}
static int coroutine_fn
@ -515,26 +629,16 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
iscsi_allocmap_set_invalid(iscsilun, sector_num, nb_sectors);
return iTask.err_code;
}
iscsi_allocationmap_set(iscsilun, sector_num, nb_sectors);
iscsi_allocmap_set_allocated(iscsilun, sector_num, nb_sectors);
return 0;
}
static bool iscsi_allocationmap_is_allocated(IscsiLun *iscsilun,
int64_t sector_num, int nb_sectors)
{
unsigned long size;
if (iscsilun->allocationmap == NULL) {
return true;
}
size = DIV_ROUND_UP(sector_num + nb_sectors, iscsilun->cluster_sectors);
return !(find_next_bit(iscsilun->allocationmap, size,
sector_num / iscsilun->cluster_sectors) == size);
}
static int64_t coroutine_fn iscsi_co_get_block_status(BlockDriverState *bs,
int64_t sector_num,
@ -619,9 +723,9 @@ retry:
}
if (ret & BDRV_BLOCK_ZERO) {
iscsi_allocationmap_clear(iscsilun, sector_num, *pnum);
iscsi_allocmap_set_unallocated(iscsilun, sector_num, *pnum);
} else {
iscsi_allocationmap_set(iscsilun, sector_num, *pnum);
iscsi_allocmap_set_allocated(iscsilun, sector_num, *pnum);
}
if (*pnum > nb_sectors) {
@ -657,17 +761,32 @@ static int coroutine_fn iscsi_co_readv(BlockDriverState *bs,
return -EINVAL;
}
if (iscsilun->lbprz && nb_sectors >= ISCSI_CHECKALLOC_THRES &&
!iscsi_allocationmap_is_allocated(iscsilun, sector_num, nb_sectors)) {
int64_t ret;
/* if cache.direct is off and we have a valid entry in our allocation map
* we can skip checking the block status and directly return zeroes if
* the request falls within an unallocated area */
if (iscsi_allocmap_is_valid(iscsilun, sector_num, nb_sectors) &&
!iscsi_allocmap_is_allocated(iscsilun, sector_num, nb_sectors)) {
qemu_iovec_memset(iov, 0, 0x00, iov->size);
return 0;
}
if (nb_sectors >= ISCSI_CHECKALLOC_THRES &&
!iscsi_allocmap_is_valid(iscsilun, sector_num, nb_sectors) &&
!iscsi_allocmap_is_allocated(iscsilun, sector_num, nb_sectors)) {
int pnum;
BlockDriverState *file;
ret = iscsi_co_get_block_status(bs, sector_num,
BDRV_REQUEST_MAX_SECTORS, &pnum, &file);
/* check the block status from the beginning of the cluster
* containing the start sector */
int64_t ret = iscsi_co_get_block_status(bs,
sector_num - sector_num % iscsilun->cluster_sectors,
BDRV_REQUEST_MAX_SECTORS, &pnum, &file);
if (ret < 0) {
return ret;
}
if (ret & BDRV_BLOCK_ZERO && pnum >= nb_sectors) {
/* if the whole request falls into an unallocated area we can avoid
* the read and directly return zeroes instead */
if (ret & BDRV_BLOCK_ZERO &&
pnum >= nb_sectors + sector_num % iscsilun->cluster_sectors) {
qemu_iovec_memset(iov, 0, 0x00, iov->size);
return 0;
}
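
The changed condition compensates for the block-status lookup starting at the containing cluster boundary rather than at sector_num itself. A worked example with assumed numbers (cluster size 8, request at sector 20), separate from the commit's code:

#include <stdio.h>

int main(void)
{
    int cluster_sectors = 8;                 /* hypothetical cluster size */
    long sector_num = 20, nb_sectors = 5;    /* request covers sectors [20, 25) */

    /* block status is queried from the start of the containing cluster */
    long query_start = sector_num - sector_num % cluster_sectors;   /* 16 */
    long head = sector_num % cluster_sectors;                       /* 4 */

    /* suppose the target reports pnum zero sectors starting at query_start;
     * only if those cover head + nb_sectors (here 9) does the whole request
     * fall inside the unallocated area, so the read can be answered with zeroes */
    long pnum = 12;
    printf("zero fast path: %s\n",
           pnum >= nb_sectors + head ? "yes" : "no");   /* 12 >= 9 -> yes */
    return 0;
}
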
@ -981,7 +1100,7 @@ retry:
return iTask.err_code;
}
iscsi_allocationmap_clear(iscsilun, sector_num, nb_sectors);
iscsi_allocmap_set_invalid(iscsilun, sector_num, nb_sectors);
return 0;
}
@ -1071,15 +1190,17 @@ retry:
}
if (iTask.status != SCSI_STATUS_GOOD) {
iscsi_allocmap_set_invalid(iscsilun, offset >> BDRV_SECTOR_BITS,
count >> BDRV_SECTOR_BITS);
return iTask.err_code;
}
if (flags & BDRV_REQ_MAY_UNMAP) {
iscsi_allocationmap_clear(iscsilun, offset >> BDRV_SECTOR_BITS,
count >> BDRV_SECTOR_BITS);
iscsi_allocmap_set_invalid(iscsilun, offset >> BDRV_SECTOR_BITS,
count >> BDRV_SECTOR_BITS);
} else {
iscsi_allocationmap_set(iscsilun, offset >> BDRV_SECTOR_BITS,
count >> BDRV_SECTOR_BITS);
iscsi_allocmap_set_allocated(iscsilun, offset >> BDRV_SECTOR_BITS,
count >> BDRV_SECTOR_BITS);
}
return 0;
@ -1652,10 +1773,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
iscsilun->cluster_sectors = (iscsilun->bl.opt_unmap_gran *
iscsilun->block_size) >> BDRV_SECTOR_BITS;
if (iscsilun->lbprz) {
iscsilun->allocationmap = iscsi_allocationmap_init(iscsilun);
if (iscsilun->allocationmap == NULL) {
ret = -ENOMEM;
}
ret = iscsi_allocmap_init(iscsilun, bs->open_flags);
}
}
@ -1692,7 +1810,7 @@ static void iscsi_close(BlockDriverState *bs)
}
iscsi_destroy_context(iscsi);
g_free(iscsilun->zeroblock);
g_free(iscsilun->allocationmap);
iscsi_allocmap_free(iscsilun);
memset(iscsilun, 0, sizeof(IscsiLun));
}
@ -1756,6 +1874,16 @@ static int iscsi_reopen_prepare(BDRVReopenState *state,
return 0;
}
static void iscsi_reopen_commit(BDRVReopenState *reopen_state)
{
IscsiLun *iscsilun = reopen_state->bs->opaque;
/* the cache.direct status might have changed */
if (iscsilun->allocmap != NULL) {
iscsi_allocmap_init(iscsilun, reopen_state->flags);
}
}
static int iscsi_truncate(BlockDriverState *bs, int64_t offset)
{
IscsiLun *iscsilun = bs->opaque;
@ -1775,9 +1903,8 @@ static int iscsi_truncate(BlockDriverState *bs, int64_t offset)
return -EINVAL;
}
if (iscsilun->allocationmap != NULL) {
g_free(iscsilun->allocationmap);
iscsilun->allocationmap = iscsi_allocationmap_init(iscsilun);
if (iscsilun->allocmap != NULL) {
iscsi_allocmap_init(iscsilun, bs->open_flags);
}
return 0;
@ -1837,6 +1964,13 @@ static int iscsi_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0;
}
static void iscsi_invalidate_cache(BlockDriverState *bs,
Error **errp)
{
IscsiLun *iscsilun = bs->opaque;
iscsi_allocmap_invalidate(iscsilun);
}
static QemuOptsList iscsi_create_opts = {
.name = "iscsi-create-opts",
.head = QTAILQ_HEAD_INITIALIZER(iscsi_create_opts.head),
@ -1860,7 +1994,9 @@ static BlockDriver bdrv_iscsi = {
.bdrv_close = iscsi_close,
.bdrv_create = iscsi_create,
.create_opts = &iscsi_create_opts,
.bdrv_reopen_prepare = iscsi_reopen_prepare,
.bdrv_reopen_prepare = iscsi_reopen_prepare,
.bdrv_reopen_commit = iscsi_reopen_commit,
.bdrv_invalidate_cache = iscsi_invalidate_cache,
.bdrv_getlength = iscsi_getlength,
.bdrv_get_info = iscsi_get_info,


@ -358,7 +358,7 @@ static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy
#ifdef DEBUG_REMAP
{
void *addr;
addr = malloc(len);
addr = g_malloc(len);
if (copy)
memcpy(addr, g2h(guest_addr), len);
else
@ -384,7 +384,7 @@ static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
return;
if (len > 0)
memcpy(g2h(guest_addr), host_ptr, len);
free(host_ptr);
g_free(host_ptr);
#endif
}


@ -608,17 +608,16 @@ int cpu_exec(CPUState *cpu)
init_delay_params(&sc, cpu);
for(;;) {
TranslationBlock *tb, *last_tb;
int tb_exit = 0;
/* prepare setjmp context for exception handling */
if (sigsetjmp(cpu->jmp_env, 0) == 0) {
TranslationBlock *tb, *last_tb = NULL;
int tb_exit = 0;
/* if an exception is pending, we execute it here */
if (cpu_handle_exception(cpu, &ret)) {
break;
}
last_tb = NULL; /* forget the last executed TB after exception */
cpu->tb_flushed = false; /* reset before first TB lookup */
for(;;) {
cpu_handle_interrupt(cpu, &last_tb);

exec.c

@ -187,10 +187,12 @@ struct CPUAddressSpace {
static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
{
static unsigned alloc_hint = 16;
if (map->nodes_nb + nodes > map->nodes_nb_alloc) {
map->nodes_nb_alloc = MAX(map->nodes_nb_alloc * 2, 16);
map->nodes_nb_alloc = MAX(map->nodes_nb_alloc, alloc_hint);
map->nodes_nb_alloc = MAX(map->nodes_nb_alloc, map->nodes_nb + nodes);
map->nodes = g_renew(Node, map->nodes, map->nodes_nb_alloc);
alloc_hint = map->nodes_nb_alloc;
}
}
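
The static alloc_hint carries the final table size from one map build to the next, so later builds can allocate once instead of growing geometrically. A standalone sketch of the same idea, with illustrative names rather than QEMU's real types:

#include <glib.h>
#include <stdio.h>

typedef struct { int dummy; } Node;

static unsigned alloc_hint = 16;   /* remembered across map (re)builds */

static void reserve(Node **nodes, unsigned *nb, unsigned *nb_alloc, unsigned want)
{
    if (*nb + want > *nb_alloc) {
        *nb_alloc = MAX(*nb_alloc * 2, 16);
        *nb_alloc = MAX(*nb_alloc, alloc_hint);   /* jump straight to the last final size */
        *nb_alloc = MAX(*nb_alloc, *nb + want);
        *nodes = g_renew(Node, *nodes, *nb_alloc);
        alloc_hint = *nb_alloc;                   /* remember it for the next build */
        printf("grew to %u\n", *nb_alloc);
    }
    *nb += want;
}

int main(void)
{
    Node *nodes = NULL;
    unsigned nb = 0, nb_alloc = 0;
    for (int i = 0; i < 100; i++) {
        reserve(&nodes, &nb, &nb_alloc, 7);   /* first build: several geometric grows */
    }
    g_free(nodes);

    nodes = NULL; nb = 0; nb_alloc = 0;
    reserve(&nodes, &nb, &nb_alloc, 7);       /* second build: one allocation, sized by the hint */
    g_free(nodes);
    return 0;
}
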


@ -812,11 +812,26 @@ static long get_file_size(FILE *f)
return size;
}
/* setup_data types */
#define SETUP_NONE 0
#define SETUP_E820_EXT 1
#define SETUP_DTB 2
#define SETUP_PCI 3
#define SETUP_EFI 4
struct setup_data {
uint64_t next;
uint32_t type;
uint32_t len;
uint8_t data[0];
} __attribute__((packed));
static void load_linux(PCMachineState *pcms,
FWCfgState *fw_cfg)
{
uint16_t protocol;
int setup_size, kernel_size, initrd_size = 0, cmdline_size;
int dtb_size, setup_data_offset;
uint32_t initrd_max;
uint8_t header[8192], *setup, *kernel, *initrd_data;
hwaddr real_addr, prot_addr, cmdline_addr, initrd_addr = 0;
@ -824,8 +839,10 @@ static void load_linux(PCMachineState *pcms,
char *vmode;
MachineState *machine = MACHINE(pcms);
PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
struct setup_data *setup_data;
const char *kernel_filename = machine->kernel_filename;
const char *initrd_filename = machine->initrd_filename;
const char *dtb_filename = machine->dtb;
const char *kernel_cmdline = machine->kernel_cmdline;
/* Align to 16 bytes as a paranoia measure */
@ -988,6 +1005,35 @@ static void load_linux(PCMachineState *pcms,
exit(1);
}
fclose(f);
/* append dtb to kernel */
if (dtb_filename) {
if (protocol < 0x209) {
fprintf(stderr, "qemu: Linux kernel too old to load a dtb\n");
exit(1);
}
dtb_size = get_image_size(dtb_filename);
if (dtb_size <= 0) {
fprintf(stderr, "qemu: error reading dtb %s: %s\n",
dtb_filename, strerror(errno));
exit(1);
}
setup_data_offset = QEMU_ALIGN_UP(kernel_size, 16);
kernel_size = setup_data_offset + sizeof(struct setup_data) + dtb_size;
kernel = g_realloc(kernel, kernel_size);
stq_p(header+0x250, prot_addr + setup_data_offset);
setup_data = (struct setup_data *)(kernel + setup_data_offset);
setup_data->next = 0;
setup_data->type = cpu_to_le32(SETUP_DTB);
setup_data->len = cpu_to_le32(dtb_size);
load_image_size(dtb_filename, setup_data->data, dtb_size);
}
memcpy(setup, header, MIN(sizeof(header), setup_size));
fw_cfg_add_i32(fw_cfg, FW_CFG_KERNEL_ADDR, prot_addr);


@ -1981,11 +1981,7 @@ static void megasas_handle_frame(MegasasState *s, uint64_t frame_addr,
break;
}
if (frame_status != MFI_STAT_INVALID_STATUS) {
if (cmd->frame) {
cmd->frame->header.cmd_status = frame_status;
} else {
megasas_frame_set_cmd_status(s, frame_addr, frame_status);
}
cmd->frame->header.cmd_status = frame_status;
megasas_unmap_frame(s, cmd);
megasas_complete_frame(s, cmd->context);
}


@ -3,6 +3,9 @@
#ifndef COMPILER_H
#define COMPILER_H
#if defined __clang_analyzer__ || defined __COVERITY__
#define QEMU_STATIC_ANALYSIS 1
#endif
/*----------------------------------------------------------------------------
| The macro QEMU_GNUC_PREREQ tests for minimum version of the GNU C compiler.
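
The point of keeping assertions visible to Coverity or the clang analyzer is that they encode invariants the tool can use to prune impossible paths. A minimal sketch of the pattern, with a hypothetical debug_assert macro modelled on the tcg_debug_assert change further down (DEBUG_EXAMPLE stands in for a build-time debug switch):

#include <assert.h>
#include <stdio.h>

#if defined __clang_analyzer__ || defined __COVERITY__
#define QEMU_STATIC_ANALYSIS 1
#endif

/* keep the check whenever a static analysis tool runs, even if normal
 * builds would compile it out */
#if defined DEBUG_EXAMPLE || defined QEMU_STATIC_ANALYSIS
#define debug_assert(x) do { assert(x); } while (0)
#else
#define debug_assert(x) do { } while (0)
#endif

static int first_char(const char *p)
{
    debug_assert(p != NULL);   /* invariant: callers never pass NULL */
    return *p;                 /* without the visible assert, an analyzer may flag a NULL deref here */
}

int main(void)
{
    printf("%c\n", first_char("qemu"));
    return 0;
}
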


@ -89,7 +89,7 @@ char *os_find_datadir(void)
if (exec_dir == NULL) {
return NULL;
}
dir = dirname(exec_dir);
dir = g_path_get_dirname(exec_dir);
max_len = strlen(dir) +
MAX(strlen(SHARE_SUFFIX), strlen(BUILD_SUFFIX)) + 1;
@ -103,6 +103,7 @@ char *os_find_datadir(void)
}
}
g_free(dir);
g_free(exec_dir);
return res;
}
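
The practical difference from dirname(3) is ownership: dirname may modify its argument and return a pointer into it, while g_path_get_dirname leaves the argument alone and returns a newly allocated string the caller must g_free (hence the separate frees of dir and exec_dir in this hunk). A minimal standalone comparison:

#include <glib.h>
#include <libgen.h>
#include <stdio.h>

int main(void)
{
    /* dirname() may modify the buffer it is given and return a pointer
     * into it (or into static storage): the input must stay writable and
     * the result must not be freed */
    char buf[] = "/usr/local/bin/qemu-system-x86_64";
    printf("dirname:            %s\n", dirname(buf));

    /* g_path_get_dirname() returns a fresh allocation owned by the caller */
    const char *path = "/usr/local/bin/qemu-system-x86_64";
    char *dir = g_path_get_dirname(path);
    printf("g_path_get_dirname: %s\n", dir);
    g_free(dir);
    return 0;
}
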


@ -2725,9 +2725,6 @@ static void x86_cpu_reset(CPUState *s)
/* init to reset state */
#ifdef CONFIG_SOFTMMU
env->hflags |= HF_SOFTMMU_MASK;
#endif
env->hflags2 |= HF2_GIF_MASK;
cpu_x86_update_cr0(env, 0x60000010);


@ -130,8 +130,6 @@
positions to ease oring with eflags. */
/* current cpl */
#define HF_CPL_SHIFT 0
/* true if soft mmu is being used */
#define HF_SOFTMMU_SHIFT 2
/* true if hardware interrupts must be disabled for next instruction */
#define HF_INHIBIT_IRQ_SHIFT 3
/* 16 or 32 segments */
@ -161,7 +159,6 @@
#define HF_MPX_IU_SHIFT 26 /* BND registers in-use */
#define HF_CPL_MASK (3 << HF_CPL_SHIFT)
#define HF_SOFTMMU_MASK (1 << HF_SOFTMMU_SHIFT)
#define HF_INHIBIT_IRQ_MASK (1 << HF_INHIBIT_IRQ_SHIFT)
#define HF_CS32_MASK (1 << HF_CS32_SHIFT)
#define HF_SS32_MASK (1 << HF_SS32_SHIFT)


@ -8224,9 +8224,9 @@ void gen_intermediate_code(CPUX86State *env, TranslationBlock *tb)
dc->popl_esp_hack = 0;
/* select memory access functions */
dc->mem_index = 0;
if (flags & HF_SOFTMMU_MASK) {
dc->mem_index = cpu_mmu_index(env, false);
}
#ifdef CONFIG_SOFTMMU
dc->mem_index = cpu_mmu_index(env, false);
#endif
dc->cpuid_features = env->features[FEAT_1_EDX];
dc->cpuid_ext_features = env->features[FEAT_1_ECX];
dc->cpuid_ext2_features = env->features[FEAT_8000_0001_EDX];
@ -8239,11 +8239,7 @@ void gen_intermediate_code(CPUX86State *env, TranslationBlock *tb)
#endif
dc->flags = flags;
dc->jmp_opt = !(dc->tf || cs->singlestep_enabled ||
(flags & HF_INHIBIT_IRQ_MASK)
#ifndef CONFIG_SOFTMMU
|| (flags & HF_SOFTMMU_MASK)
#endif
);
(flags & HF_INHIBIT_IRQ_MASK));
/* Do not optimize repz jumps at all in icount mode, because
rep movsS instructions are executed with different paths
in !repz_opt and repz_opt modes. The first one was used


@ -191,7 +191,7 @@ typedef uint64_t tcg_insn_unit;
#endif
#ifdef CONFIG_DEBUG_TCG
#if defined CONFIG_DEBUG_TCG || defined QEMU_STATIC_ANALYSIS
# define tcg_debug_assert(X) do { assert(X); } while (0)
#elif QEMU_GNUC_PREREQ(4, 5)
# define tcg_debug_assert(X) \


@ -299,9 +299,11 @@ void qemu_init_exec_dir(const char *argv0)
return;
}
}
dir = dirname(p);
dir = g_path_get_dirname(p);
pstrcpy(exec_dir, sizeof(exec_dir), dir);
g_free(dir);
}
char *qemu_get_exec_dir(void)