Vulkan support.

GTK port support. Breaks other platforms.
BearOso 2022-06-14 19:09:51 -05:00
parent 109fedf42c
commit 259dfd07ae
87 changed files with 257547 additions and 55 deletions

.gitmodules (vendored)

@@ -13,6 +13,3 @@
[submodule "shaders/glslang"]
	path = external/glslang
	url = https://github.com/KhronosGroup/glslang.git
[submodule "vulkan/vulkan"]
path = external/vulkan-headers
url = https://github.com/KhronosGroup/Vulkan-Headers.git

@@ -1 +1 @@
-Subproject commit 1458bae62ec67ea7d12c5a13b740e23ed4bb226c
+Subproject commit 197a273fd494321157f40a962c51b5fa8c9c3581


@@ -0,0 +1,116 @@
CC0 1.0 Universal
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator and
subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for the
purpose of contributing to a commons of creative, cultural and scientific
works ("Commons") that the public can reliably and without fear of later
claims of infringement build upon, modify, incorporate in other works, reuse
and redistribute as freely as possible in any form whatsoever and for any
purposes, including without limitation commercial purposes. These owners may
contribute to the Commons to promote the ideal of a free culture and the
further production of creative, cultural and scientific works, or to gain
reputation or greater distribution for their Work in part through the use and
efforts of others.
For these and/or other purposes and motivations, and without any expectation
of additional consideration or compensation, the person associating CC0 with a
Work (the "Affirmer"), to the extent that he or she is an owner of Copyright
and Related Rights in the Work, voluntarily elects to apply CC0 to the Work
and publicly distribute the Work under its terms, with knowledge of his or her
Copyright and Related Rights in the Work and the meaning and intended legal
effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not limited
to, the following:
i. the right to reproduce, adapt, distribute, perform, display, communicate,
and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or likeness
depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data in
a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation thereof,
including any amended or successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout the world
based on applicable law or treaty, and any national implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention of,
applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
unconditionally waives, abandons, and surrenders all of Affirmer's Copyright
and Related Rights and associated claims and causes of action, whether now
known or unknown (including existing as well as future claims and causes of
action), in the Work (i) in all territories worldwide, (ii) for the maximum
duration provided by applicable law or treaty (including future time
extensions), (iii) in any current or future medium and for any number of
copies, and (iv) for any purpose whatsoever, including without limitation
commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes
the Waiver for the benefit of each member of the public at large and to the
detriment of Affirmer's heirs and successors, fully intending that such Waiver
shall not be subject to revocation, rescission, cancellation, termination, or
any other legal or equitable action to disrupt the quiet enjoyment of the Work
by the public as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason be
judged legally invalid or ineffective under applicable law, then the Waiver
shall be preserved to the maximum extent permitted taking into account
Affirmer's express Statement of Purpose. In addition, to the extent the Waiver
is so judged Affirmer hereby grants to each affected person a royalty-free,
non transferable, non sublicensable, non exclusive, irrevocable and
unconditional license to exercise Affirmer's Copyright and Related Rights in
the Work (i) in all territories worldwide, (ii) for the maximum duration
provided by applicable law or treaty (including future time extensions), (iii)
in any current or future medium and for any number of copies, and (iv) for any
purpose whatsoever, including without limitation commercial, advertising or
promotional purposes (the "License"). The License shall be deemed effective as
of the date CC0 was applied by Affirmer to the Work. Should any part of the
License for any reason be judged legally invalid or ineffective under
applicable law, such partial invalidity or ineffectiveness shall not
invalidate the remainder of the License, and in such case Affirmer hereby
affirms that he or she will not (i) exercise any of his or her remaining
Copyright and Related Rights in the Work or (ii) assert any associated claims
and causes of action with respect to the Work, in either case contrary to
Affirmer's express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or warranties
of any kind concerning the Work, express, implied, statutory or otherwise,
including without limitation warranties of title, merchantability, fitness
for a particular purpose, non infringement, or the absence of latent or
other defects, accuracy, or the present or absence of errors, whether or not
discoverable, all to the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without limitation
any person's Copyright and Related Rights in the Work. Further, Affirmer
disclaims responsibility for obtaining any necessary consents, permissions
or other rights required for any use of the Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to this
CC0 or use of the Work.
For more information, please see
<http://creativecommons.org/publicdomain/zero/1.0/>

File diff suppressed because it is too large.


@@ -0,0 +1,135 @@
#ifndef VULKAN_MEMORY_ALLOCATOR_HPP
#define VULKAN_MEMORY_ALLOCATOR_HPP
#if !defined(AMD_VULKAN_MEMORY_ALLOCATOR_H)
#include <vk_mem_alloc.h>
#endif
#include <vulkan/vulkan.hpp>
#if !defined(VMA_HPP_NAMESPACE)
#define VMA_HPP_NAMESPACE vma
#endif
#define VMA_HPP_NAMESPACE_STRING VULKAN_HPP_STRINGIFY(VMA_HPP_NAMESPACE)
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VMA_HPP_NAMESPACE {
struct Dispatcher {}; // VMA uses function pointers from VmaAllocator instead
class Allocator;
template<class T>
VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher> createUniqueHandle(const T& t) VULKAN_HPP_NOEXCEPT {
return VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>(t);
}
template<class T, class O>
VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher> createUniqueHandle(const T& t, const O* o) VULKAN_HPP_NOEXCEPT {
return VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>(t, o);
}
template<class F, class S, class O>
std::pair<VULKAN_HPP_NAMESPACE::UniqueHandle<F, Dispatcher>, VULKAN_HPP_NAMESPACE::UniqueHandle<S, Dispatcher>>
createUniqueHandle(const std::pair<F, S>& t, const O* o) VULKAN_HPP_NOEXCEPT {
return {
VULKAN_HPP_NAMESPACE::UniqueHandle<F, Dispatcher>(t.first, o),
VULKAN_HPP_NAMESPACE::UniqueHandle<S, Dispatcher>(t.second, o)
};
}
template<class T, class UniqueVectorAllocator, class VectorAllocator, class O>
std::vector<VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>, UniqueVectorAllocator>
createUniqueHandleVector(const std::vector<T, VectorAllocator>& vector, const O* o,
const UniqueVectorAllocator& vectorAllocator) VULKAN_HPP_NOEXCEPT {
std::vector<VULKAN_HPP_NAMESPACE::UniqueHandle<T, Dispatcher>, UniqueVectorAllocator> result(vectorAllocator);
result.reserve(vector.size());
for (const T& t : vector) result.emplace_back(t, o);
return result;
}
template<class T, class Owner> class Deleter {
const Owner* owner;
public:
Deleter() = default;
Deleter(const Owner* owner) VULKAN_HPP_NOEXCEPT : owner(owner) {}
protected:
void destroy(const T& t) VULKAN_HPP_NOEXCEPT; // Implemented manually for each handle type
};
template<class T> class Deleter<T, void> {
protected:
void destroy(const T& t) VULKAN_HPP_NOEXCEPT { t.destroy(); }
};
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<Buffer, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<Buffer, VMA_HPP_NAMESPACE::Allocator>;
};
template<> struct UniqueHandleTraits<Image, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<Image, VMA_HPP_NAMESPACE::Allocator>;
};
}
namespace VMA_HPP_NAMESPACE {
using UniqueBuffer = VULKAN_HPP_NAMESPACE::UniqueHandle<VULKAN_HPP_NAMESPACE::Buffer, Dispatcher>;
using UniqueImage = VULKAN_HPP_NAMESPACE::UniqueHandle<VULKAN_HPP_NAMESPACE::Image, Dispatcher>;
}
#endif
#include "vk_mem_alloc_enums.hpp"
#include "vk_mem_alloc_handles.hpp"
#include "vk_mem_alloc_structs.hpp"
#include "vk_mem_alloc_funcs.hpp"
namespace VMA_HPP_NAMESPACE {
#ifndef VULKAN_HPP_NO_SMART_HANDLE
# define VMA_HPP_DESTROY_IMPL(NAME) \
template<> VULKAN_HPP_INLINE void VULKAN_HPP_NAMESPACE::UniqueHandleTraits<NAME, Dispatcher>::deleter::destroy(const NAME& t) VULKAN_HPP_NOEXCEPT
VMA_HPP_DESTROY_IMPL(VULKAN_HPP_NAMESPACE::Buffer) { owner->destroyBuffer(t, nullptr); }
VMA_HPP_DESTROY_IMPL(VULKAN_HPP_NAMESPACE::Image) { owner->destroyImage(t, nullptr); }
VMA_HPP_DESTROY_IMPL(Pool) { owner->destroyPool(t); }
VMA_HPP_DESTROY_IMPL(Allocation) { owner->freeMemory(t); }
VMA_HPP_DESTROY_IMPL(VirtualAllocation) { owner->virtualFree(t); }
# undef VMA_HPP_DESTROY_IMPL
#endif
template<class InstanceDispatcher, class DeviceDispatcher>
VULKAN_HPP_CONSTEXPR VulkanFunctions functionsFromDispatcher(InstanceDispatcher const * instance,
DeviceDispatcher const * device) VULKAN_HPP_NOEXCEPT {
return VulkanFunctions {
instance->vkGetInstanceProcAddr,
instance->vkGetDeviceProcAddr,
instance->vkGetPhysicalDeviceProperties,
instance->vkGetPhysicalDeviceMemoryProperties,
device->vkAllocateMemory,
device->vkFreeMemory,
device->vkMapMemory,
device->vkUnmapMemory,
device->vkFlushMappedMemoryRanges,
device->vkInvalidateMappedMemoryRanges,
device->vkBindBufferMemory,
device->vkBindImageMemory,
device->vkGetBufferMemoryRequirements,
device->vkGetImageMemoryRequirements,
device->vkCreateBuffer,
device->vkDestroyBuffer,
device->vkCreateImage,
device->vkDestroyImage,
device->vkCmdCopyBuffer,
device->vkGetBufferMemoryRequirements2KHR ? device->vkGetBufferMemoryRequirements2KHR : device->vkGetBufferMemoryRequirements2,
device->vkGetImageMemoryRequirements2KHR ? device->vkGetImageMemoryRequirements2KHR : device->vkGetImageMemoryRequirements2,
device->vkBindBufferMemory2KHR ? device->vkBindBufferMemory2KHR : device->vkBindBufferMemory2,
device->vkBindImageMemory2KHR ? device->vkBindImageMemory2KHR : device->vkBindImageMemory2,
instance->vkGetPhysicalDeviceMemoryProperties2KHR ? instance->vkGetPhysicalDeviceMemoryProperties2KHR : instance->vkGetPhysicalDeviceMemoryProperties2,
device->vkGetDeviceBufferMemoryRequirements,
device->vkGetDeviceImageMemoryRequirements
};
}
template<class Dispatch = VULKAN_HPP_DEFAULT_DISPATCHER_TYPE>
VULKAN_HPP_CONSTEXPR VulkanFunctions functionsFromDispatcher(Dispatch const & dispatch
VULKAN_HPP_DEFAULT_DISPATCHER_ASSIGNMENT) VULKAN_HPP_NOEXCEPT {
return functionsFromDispatcher(&dispatch, &dispatch);
}
}
#endif


@@ -0,0 +1,450 @@
#ifndef VULKAN_MEMORY_ALLOCATOR_ENUMS_HPP
#define VULKAN_MEMORY_ALLOCATOR_ENUMS_HPP
namespace VMA_HPP_NAMESPACE {
enum class AllocatorCreateFlagBits : VmaAllocatorCreateFlags {
eExternallySynchronized = VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT,
eKhrDedicatedAllocation = VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT,
eKhrBindMemory2 = VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT,
eExtMemoryBudget = VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT,
eAmdDeviceCoherentMemory = VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT,
eBufferDeviceAddress = VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT,
eExtMemoryPriority = VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
};
VULKAN_HPP_INLINE std::string to_string(AllocatorCreateFlagBits value) {
if (value == AllocatorCreateFlagBits::eExternallySynchronized) return "ExternallySynchronized";
if (value == AllocatorCreateFlagBits::eKhrDedicatedAllocation) return "KhrDedicatedAllocation";
if (value == AllocatorCreateFlagBits::eKhrBindMemory2) return "KhrBindMemory2";
if (value == AllocatorCreateFlagBits::eExtMemoryBudget) return "ExtMemoryBudget";
if (value == AllocatorCreateFlagBits::eAmdDeviceCoherentMemory) return "AmdDeviceCoherentMemory";
if (value == AllocatorCreateFlagBits::eBufferDeviceAddress) return "BufferDeviceAddress";
if (value == AllocatorCreateFlagBits::eExtMemoryPriority) return "ExtMemoryPriority";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::AllocatorCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::AllocatorCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eExternallySynchronized
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eKhrDedicatedAllocation
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eKhrBindMemory2
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eExtMemoryBudget
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eAmdDeviceCoherentMemory
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eBufferDeviceAddress
| VMA_HPP_NAMESPACE::AllocatorCreateFlagBits::eExtMemoryPriority;
};
}
namespace VMA_HPP_NAMESPACE {
using AllocatorCreateFlags = VULKAN_HPP_NAMESPACE::Flags<AllocatorCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator|(AllocatorCreateFlagBits bit0, AllocatorCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocatorCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator&(AllocatorCreateFlagBits bit0, AllocatorCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocatorCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator^(AllocatorCreateFlagBits bit0, AllocatorCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocatorCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocatorCreateFlags operator~(AllocatorCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(AllocatorCreateFlags(bits));
}
VULKAN_HPP_INLINE std::string to_string(AllocatorCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & AllocatorCreateFlagBits::eExternallySynchronized) result += "ExternallySynchronized | ";
if (value & AllocatorCreateFlagBits::eKhrDedicatedAllocation) result += "KhrDedicatedAllocation | ";
if (value & AllocatorCreateFlagBits::eKhrBindMemory2) result += "KhrBindMemory2 | ";
if (value & AllocatorCreateFlagBits::eExtMemoryBudget) result += "ExtMemoryBudget | ";
if (value & AllocatorCreateFlagBits::eAmdDeviceCoherentMemory) result += "AmdDeviceCoherentMemory | ";
if (value & AllocatorCreateFlagBits::eBufferDeviceAddress) result += "BufferDeviceAddress | ";
if (value & AllocatorCreateFlagBits::eExtMemoryPriority) result += "ExtMemoryPriority | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
}
namespace VMA_HPP_NAMESPACE {
enum class MemoryUsage {
eUnknown = VMA_MEMORY_USAGE_UNKNOWN,
eGpuOnly = VMA_MEMORY_USAGE_GPU_ONLY,
eCpuOnly = VMA_MEMORY_USAGE_CPU_ONLY,
eCpuToGpu = VMA_MEMORY_USAGE_CPU_TO_GPU,
eGpuToCpu = VMA_MEMORY_USAGE_GPU_TO_CPU,
eCpuCopy = VMA_MEMORY_USAGE_CPU_COPY,
eGpuLazilyAllocated = VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED,
eAuto = VMA_MEMORY_USAGE_AUTO,
eAutoPreferDevice = VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE,
eAutoPreferHost = VMA_MEMORY_USAGE_AUTO_PREFER_HOST
};
VULKAN_HPP_INLINE std::string to_string(MemoryUsage value) {
if (value == MemoryUsage::eUnknown) return "Unknown";
if (value == MemoryUsage::eGpuOnly) return "GpuOnly";
if (value == MemoryUsage::eCpuOnly) return "CpuOnly";
if (value == MemoryUsage::eCpuToGpu) return "CpuToGpu";
if (value == MemoryUsage::eGpuToCpu) return "GpuToCpu";
if (value == MemoryUsage::eCpuCopy) return "CpuCopy";
if (value == MemoryUsage::eGpuLazilyAllocated) return "GpuLazilyAllocated";
if (value == MemoryUsage::eAuto) return "Auto";
if (value == MemoryUsage::eAutoPreferDevice) return "AutoPreferDevice";
if (value == MemoryUsage::eAutoPreferHost) return "AutoPreferHost";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VMA_HPP_NAMESPACE {
enum class AllocationCreateFlagBits : VmaAllocationCreateFlags {
eDedicatedMemory = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
eNeverAllocate = VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT,
eMapped = VMA_ALLOCATION_CREATE_MAPPED_BIT,
eUserDataCopyString = VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT,
eUpperAddress = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
eDontBind = VMA_ALLOCATION_CREATE_DONT_BIND_BIT,
eWithinBudget = VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT,
eCanAlias = VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT,
eHostAccessSequentialWrite = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT,
eHostAccessRandom = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT,
eHostAccessAllowTransferInstead = VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT,
eStrategyMinMemory = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
eStrategyMinTime = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
eStrategyMinOffset = VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
eStrategyBestFit = VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT,
eStrategyFirstFit = VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT
};
VULKAN_HPP_INLINE std::string to_string(AllocationCreateFlagBits value) {
if (value == AllocationCreateFlagBits::eDedicatedMemory) return "DedicatedMemory";
if (value == AllocationCreateFlagBits::eNeverAllocate) return "NeverAllocate";
if (value == AllocationCreateFlagBits::eMapped) return "Mapped";
if (value == AllocationCreateFlagBits::eUserDataCopyString) return "UserDataCopyString";
if (value == AllocationCreateFlagBits::eUpperAddress) return "UpperAddress";
if (value == AllocationCreateFlagBits::eDontBind) return "DontBind";
if (value == AllocationCreateFlagBits::eWithinBudget) return "WithinBudget";
if (value == AllocationCreateFlagBits::eCanAlias) return "CanAlias";
if (value == AllocationCreateFlagBits::eHostAccessSequentialWrite) return "HostAccessSequentialWrite";
if (value == AllocationCreateFlagBits::eHostAccessRandom) return "HostAccessRandom";
if (value == AllocationCreateFlagBits::eHostAccessAllowTransferInstead) return "HostAccessAllowTransferInstead";
if (value == AllocationCreateFlagBits::eStrategyMinMemory) return "StrategyMinMemory";
if (value == AllocationCreateFlagBits::eStrategyMinTime) return "StrategyMinTime";
if (value == AllocationCreateFlagBits::eStrategyMinOffset) return "StrategyMinOffset";
if (value == AllocationCreateFlagBits::eStrategyBestFit) return "StrategyBestFit";
if (value == AllocationCreateFlagBits::eStrategyFirstFit) return "StrategyFirstFit";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::AllocationCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::AllocationCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eDedicatedMemory
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eNeverAllocate
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eMapped
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eUserDataCopyString
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eUpperAddress
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eDontBind
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eWithinBudget
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eCanAlias
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eHostAccessSequentialWrite
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eHostAccessRandom
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eHostAccessAllowTransferInstead
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyMinMemory
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyMinTime
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyMinOffset
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyBestFit
| VMA_HPP_NAMESPACE::AllocationCreateFlagBits::eStrategyFirstFit;
};
}
namespace VMA_HPP_NAMESPACE {
using AllocationCreateFlags = VULKAN_HPP_NAMESPACE::Flags<AllocationCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator|(AllocationCreateFlagBits bit0, AllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocationCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator&(AllocationCreateFlagBits bit0, AllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocationCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator^(AllocationCreateFlagBits bit0, AllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return AllocationCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR AllocationCreateFlags operator~(AllocationCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(AllocationCreateFlags(bits));
}
VULKAN_HPP_INLINE std::string to_string(AllocationCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & AllocationCreateFlagBits::eDedicatedMemory) result += "DedicatedMemory | ";
if (value & AllocationCreateFlagBits::eNeverAllocate) result += "NeverAllocate | ";
if (value & AllocationCreateFlagBits::eMapped) result += "Mapped | ";
if (value & AllocationCreateFlagBits::eUserDataCopyString) result += "UserDataCopyString | ";
if (value & AllocationCreateFlagBits::eUpperAddress) result += "UpperAddress | ";
if (value & AllocationCreateFlagBits::eDontBind) result += "DontBind | ";
if (value & AllocationCreateFlagBits::eWithinBudget) result += "WithinBudget | ";
if (value & AllocationCreateFlagBits::eCanAlias) result += "CanAlias | ";
if (value & AllocationCreateFlagBits::eHostAccessSequentialWrite) result += "HostAccessSequentialWrite | ";
if (value & AllocationCreateFlagBits::eHostAccessRandom) result += "HostAccessRandom | ";
if (value & AllocationCreateFlagBits::eHostAccessAllowTransferInstead) result += "HostAccessAllowTransferInstead | ";
if (value & AllocationCreateFlagBits::eStrategyMinMemory) result += "StrategyMinMemory | ";
if (value & AllocationCreateFlagBits::eStrategyMinTime) result += "StrategyMinTime | ";
if (value & AllocationCreateFlagBits::eStrategyMinOffset) result += "StrategyMinOffset | ";
if (value & AllocationCreateFlagBits::eStrategyBestFit) result += "StrategyBestFit | ";
if (value & AllocationCreateFlagBits::eStrategyFirstFit) result += "StrategyFirstFit | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
}
namespace VMA_HPP_NAMESPACE {
enum class PoolCreateFlagBits : VmaPoolCreateFlags {
eIgnoreBufferImageGranularity = VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT,
eLinearAlgorithm = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT
};
VULKAN_HPP_INLINE std::string to_string(PoolCreateFlagBits value) {
if (value == PoolCreateFlagBits::eIgnoreBufferImageGranularity) return "IgnoreBufferImageGranularity";
if (value == PoolCreateFlagBits::eLinearAlgorithm) return "LinearAlgorithm";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::PoolCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::PoolCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::PoolCreateFlagBits::eIgnoreBufferImageGranularity
| VMA_HPP_NAMESPACE::PoolCreateFlagBits::eLinearAlgorithm;
};
}
namespace VMA_HPP_NAMESPACE {
using PoolCreateFlags = VULKAN_HPP_NAMESPACE::Flags<PoolCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator|(PoolCreateFlagBits bit0, PoolCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return PoolCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator&(PoolCreateFlagBits bit0, PoolCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return PoolCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator^(PoolCreateFlagBits bit0, PoolCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return PoolCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR PoolCreateFlags operator~(PoolCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(PoolCreateFlags(bits));
}
VULKAN_HPP_INLINE std::string to_string(PoolCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & PoolCreateFlagBits::eIgnoreBufferImageGranularity) result += "IgnoreBufferImageGranularity | ";
if (value & PoolCreateFlagBits::eLinearAlgorithm) result += "LinearAlgorithm | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
}
namespace VMA_HPP_NAMESPACE {
enum class DefragmentationFlagBits : VmaDefragmentationFlags {
eFlagAlgorithmFast = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT,
eFlagAlgorithmBalanced = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT,
eFlagAlgorithmFull = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT,
eFlagAlgorithmExtensive = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT
};
VULKAN_HPP_INLINE std::string to_string(DefragmentationFlagBits value) {
if (value == DefragmentationFlagBits::eFlagAlgorithmFast) return "FlagAlgorithmFast";
if (value == DefragmentationFlagBits::eFlagAlgorithmBalanced) return "FlagAlgorithmBalanced";
if (value == DefragmentationFlagBits::eFlagAlgorithmFull) return "FlagAlgorithmFull";
if (value == DefragmentationFlagBits::eFlagAlgorithmExtensive) return "FlagAlgorithmExtensive";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::DefragmentationFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::DefragmentationFlagBits> allFlags =
VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmFast
| VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmBalanced
| VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmFull
| VMA_HPP_NAMESPACE::DefragmentationFlagBits::eFlagAlgorithmExtensive;
};
}
namespace VMA_HPP_NAMESPACE {
using DefragmentationFlags = VULKAN_HPP_NAMESPACE::Flags<DefragmentationFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator|(DefragmentationFlagBits bit0, DefragmentationFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return DefragmentationFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator&(DefragmentationFlagBits bit0, DefragmentationFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return DefragmentationFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator^(DefragmentationFlagBits bit0, DefragmentationFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return DefragmentationFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR DefragmentationFlags operator~(DefragmentationFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(DefragmentationFlags(bits));
}
VULKAN_HPP_INLINE std::string to_string(DefragmentationFlags value) {
if (!value) return "{}";
std::string result;
if (value & DefragmentationFlagBits::eFlagAlgorithmFast) result += "FlagAlgorithmFast | ";
if (value & DefragmentationFlagBits::eFlagAlgorithmBalanced) result += "FlagAlgorithmBalanced | ";
if (value & DefragmentationFlagBits::eFlagAlgorithmFull) result += "FlagAlgorithmFull | ";
if (value & DefragmentationFlagBits::eFlagAlgorithmExtensive) result += "FlagAlgorithmExtensive | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
}
namespace VMA_HPP_NAMESPACE {
enum class DefragmentationMoveOperation {
eCopy = VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY,
eIgnore = VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE,
eDestroy = VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY
};
VULKAN_HPP_INLINE std::string to_string(DefragmentationMoveOperation value) {
if (value == DefragmentationMoveOperation::eCopy) return "Copy";
if (value == DefragmentationMoveOperation::eIgnore) return "Ignore";
if (value == DefragmentationMoveOperation::eDestroy) return "Destroy";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VMA_HPP_NAMESPACE {
enum class VirtualBlockCreateFlagBits : VmaVirtualBlockCreateFlags {
eLinearAlgorithm = VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT
};
VULKAN_HPP_INLINE std::string to_string(VirtualBlockCreateFlagBits value) {
if (value == VirtualBlockCreateFlagBits::eLinearAlgorithm) return "LinearAlgorithm";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::VirtualBlockCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::VirtualBlockCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::VirtualBlockCreateFlagBits::eLinearAlgorithm;
};
}
namespace VMA_HPP_NAMESPACE {
using VirtualBlockCreateFlags = VULKAN_HPP_NAMESPACE::Flags<VirtualBlockCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator|(VirtualBlockCreateFlagBits bit0, VirtualBlockCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualBlockCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator&(VirtualBlockCreateFlagBits bit0, VirtualBlockCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualBlockCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator^(VirtualBlockCreateFlagBits bit0, VirtualBlockCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualBlockCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualBlockCreateFlags operator~(VirtualBlockCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(VirtualBlockCreateFlags(bits));
}
VULKAN_HPP_INLINE std::string to_string(VirtualBlockCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & VirtualBlockCreateFlagBits::eLinearAlgorithm) result += "LinearAlgorithm | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
}
namespace VMA_HPP_NAMESPACE {
enum class VirtualAllocationCreateFlagBits : VmaVirtualAllocationCreateFlags {
eUpperAddress = VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
eStrategyMinMemory = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
eStrategyMinTime = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
eStrategyMinOffset = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT
};
VULKAN_HPP_INLINE std::string to_string(VirtualAllocationCreateFlagBits value) {
if (value == VirtualAllocationCreateFlagBits::eUpperAddress) return "UpperAddress";
if (value == VirtualAllocationCreateFlagBits::eStrategyMinMemory) return "StrategyMinMemory";
if (value == VirtualAllocationCreateFlagBits::eStrategyMinTime) return "StrategyMinTime";
if (value == VirtualAllocationCreateFlagBits::eStrategyMinOffset) return "StrategyMinOffset";
return "invalid ( " + VULKAN_HPP_NAMESPACE::toHexString(static_cast<uint32_t>(value)) + " )";
}
}
namespace VULKAN_HPP_NAMESPACE {
template<> struct FlagTraits<VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits> {
static VULKAN_HPP_CONST_OR_CONSTEXPR bool isBitmask = true;
static VULKAN_HPP_CONST_OR_CONSTEXPR Flags<VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits> allFlags =
VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eUpperAddress
| VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eStrategyMinMemory
| VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eStrategyMinTime
| VMA_HPP_NAMESPACE::VirtualAllocationCreateFlagBits::eStrategyMinOffset;
};
}
namespace VMA_HPP_NAMESPACE {
using VirtualAllocationCreateFlags = VULKAN_HPP_NAMESPACE::Flags<VirtualAllocationCreateFlagBits>;
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator|(VirtualAllocationCreateFlagBits bit0, VirtualAllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualAllocationCreateFlags(bit0) | bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator&(VirtualAllocationCreateFlagBits bit0, VirtualAllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualAllocationCreateFlags(bit0) & bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator^(VirtualAllocationCreateFlagBits bit0, VirtualAllocationCreateFlagBits bit1) VULKAN_HPP_NOEXCEPT {
return VirtualAllocationCreateFlags(bit0) ^ bit1;
}
VULKAN_HPP_INLINE VULKAN_HPP_CONSTEXPR VirtualAllocationCreateFlags operator~(VirtualAllocationCreateFlagBits bits) VULKAN_HPP_NOEXCEPT {
return ~(VirtualAllocationCreateFlags(bits));
}
VULKAN_HPP_INLINE std::string to_string(VirtualAllocationCreateFlags value) {
if (!value) return "{}";
std::string result;
if (value & VirtualAllocationCreateFlagBits::eUpperAddress) result += "UpperAddress | ";
if (value & VirtualAllocationCreateFlagBits::eStrategyMinMemory) result += "StrategyMinMemory | ";
if (value & VirtualAllocationCreateFlagBits::eStrategyMinTime) result += "StrategyMinTime | ";
if (value & VirtualAllocationCreateFlagBits::eStrategyMinOffset) result += "StrategyMinOffset | ";
return "{ " + result.substr( 0, result.size() - 3 ) + " }";
}
}
#endif
File diff suppressed because it is too large

@@ -0,0 +1,929 @@
#ifndef VULKAN_MEMORY_ALLOCATOR_HANDLES_HPP
#define VULKAN_MEMORY_ALLOCATOR_HANDLES_HPP
namespace VMA_HPP_NAMESPACE {
struct DeviceMemoryCallbacks;
struct VulkanFunctions;
struct AllocatorCreateInfo;
struct AllocatorInfo;
struct Statistics;
struct DetailedStatistics;
struct TotalStatistics;
struct Budget;
struct AllocationCreateInfo;
struct PoolCreateInfo;
struct AllocationInfo;
struct DefragmentationInfo;
struct DefragmentationMove;
struct DefragmentationPassMoveInfo;
struct DefragmentationStats;
struct VirtualBlockCreateInfo;
struct VirtualAllocationCreateInfo;
struct VirtualAllocationInfo;
class Pool;
class Allocation;
class DefragmentationContext;
class VirtualAllocation;
class Allocator;
class VirtualBlock;
}
namespace VMA_HPP_NAMESPACE {
class Pool {
public:
using CType = VmaPool;
using NativeType = VmaPool;
public:
VULKAN_HPP_CONSTEXPR Pool() = default;
VULKAN_HPP_CONSTEXPR Pool(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT Pool(VmaPool pool) VULKAN_HPP_NOEXCEPT : m_pool(pool) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
Pool& operator=(VmaPool pool) VULKAN_HPP_NOEXCEPT {
m_pool = pool;
return *this;
}
#endif
Pool& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_pool = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(Pool const &) const = default;
#else
bool operator==(Pool const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_pool == rhs.m_pool;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaPool() const VULKAN_HPP_NOEXCEPT {
return m_pool;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_pool != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_pool == VK_NULL_HANDLE;
}
private:
VmaPool m_pool = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(Pool) == sizeof(VmaPool),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<VMA_HPP_NAMESPACE::Pool, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::Pool, VMA_HPP_NAMESPACE::Allocator>;
};
}
namespace VMA_HPP_NAMESPACE { using UniquePool = VULKAN_HPP_NAMESPACE::UniqueHandle<Pool, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class Allocation {
public:
using CType = VmaAllocation;
using NativeType = VmaAllocation;
public:
VULKAN_HPP_CONSTEXPR Allocation() = default;
VULKAN_HPP_CONSTEXPR Allocation(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT Allocation(VmaAllocation allocation) VULKAN_HPP_NOEXCEPT : m_allocation(allocation) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
Allocation& operator=(VmaAllocation allocation) VULKAN_HPP_NOEXCEPT {
m_allocation = allocation;
return *this;
}
#endif
Allocation& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_allocation = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(Allocation const &) const = default;
#else
bool operator==(Allocation const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_allocation == rhs.m_allocation;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaAllocation() const VULKAN_HPP_NOEXCEPT {
return m_allocation;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_allocation != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_allocation == VK_NULL_HANDLE;
}
private:
VmaAllocation m_allocation = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(Allocation) == sizeof(VmaAllocation),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<VMA_HPP_NAMESPACE::Allocation, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::Allocation, VMA_HPP_NAMESPACE::Allocator>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueAllocation = VULKAN_HPP_NAMESPACE::UniqueHandle<Allocation, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class DefragmentationContext {
public:
using CType = VmaDefragmentationContext;
using NativeType = VmaDefragmentationContext;
public:
VULKAN_HPP_CONSTEXPR DefragmentationContext() = default;
VULKAN_HPP_CONSTEXPR DefragmentationContext(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT DefragmentationContext(VmaDefragmentationContext defragmentationContext) VULKAN_HPP_NOEXCEPT : m_defragmentationContext(defragmentationContext) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
DefragmentationContext& operator=(VmaDefragmentationContext defragmentationContext) VULKAN_HPP_NOEXCEPT {
m_defragmentationContext = defragmentationContext;
return *this;
}
#endif
DefragmentationContext& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_defragmentationContext = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(DefragmentationContext const &) const = default;
#else
bool operator==(DefragmentationContext const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext == rhs.m_defragmentationContext;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaDefragmentationContext() const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_defragmentationContext == VK_NULL_HANDLE;
}
private:
VmaDefragmentationContext m_defragmentationContext = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(DefragmentationContext) == sizeof(VmaDefragmentationContext),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<VMA_HPP_NAMESPACE::DefragmentationContext, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::DefragmentationContext, void>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueDefragmentationContext = VULKAN_HPP_NAMESPACE::UniqueHandle<DefragmentationContext, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class Allocator {
public:
using CType = VmaAllocator;
using NativeType = VmaAllocator;
public:
VULKAN_HPP_CONSTEXPR Allocator() = default;
VULKAN_HPP_CONSTEXPR Allocator(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT Allocator(VmaAllocator allocator) VULKAN_HPP_NOEXCEPT : m_allocator(allocator) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
Allocator& operator=(VmaAllocator allocator) VULKAN_HPP_NOEXCEPT {
m_allocator = allocator;
return *this;
}
#endif
Allocator& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_allocator = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(Allocator const &) const = default;
#else
bool operator==(Allocator const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_allocator == rhs.m_allocator;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaAllocator() const VULKAN_HPP_NOEXCEPT {
return m_allocator;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_allocator != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_allocator == VK_NULL_HANDLE;
}
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroy() const;
#else
void destroy() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS AllocatorInfo getAllocatorInfo() const;
#endif
void getAllocatorInfo(AllocatorInfo* allocatorInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS const VULKAN_HPP_NAMESPACE::PhysicalDeviceProperties* getPhysicalDeviceProperties() const;
#endif
void getPhysicalDeviceProperties(const VULKAN_HPP_NAMESPACE::PhysicalDeviceProperties** physicalDeviceProperties) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS const VULKAN_HPP_NAMESPACE::PhysicalDeviceMemoryProperties* getMemoryProperties() const;
#endif
void getMemoryProperties(const VULKAN_HPP_NAMESPACE::PhysicalDeviceMemoryProperties** physicalDeviceMemoryProperties) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VULKAN_HPP_NAMESPACE::MemoryPropertyFlags getMemoryTypeProperties(uint32_t memoryTypeIndex) const;
#endif
void getMemoryTypeProperties(uint32_t memoryTypeIndex,
VULKAN_HPP_NAMESPACE::MemoryPropertyFlags* flags) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setCurrentFrameIndex(uint32_t frameIndex) const;
#else
void setCurrentFrameIndex(uint32_t frameIndex) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS TotalStatistics calculateStatistics() const;
#endif
void calculateStatistics(TotalStatistics* stats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
template<typename VectorAllocator = std::allocator<Budget>,
typename B = VectorAllocator,
typename std::enable_if<std::is_same<typename B::value_type, Budget>::value, int>::type = 0>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS std::vector<Budget, VectorAllocator> getHeapBudgets(VectorAllocator& vectorAllocator) const;
template<typename VectorAllocator = std::allocator<Budget>>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS std::vector<Budget, VectorAllocator> getHeapBudgets() const;
#endif
void getHeapBudgets(Budget* budgets) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<uint32_t>::type findMemoryTypeIndex(uint32_t memoryTypeBits,
const AllocationCreateInfo& allocationCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result findMemoryTypeIndex(uint32_t memoryTypeBits,
const AllocationCreateInfo* allocationCreateInfo,
uint32_t* memoryTypeIndex) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<uint32_t>::type findMemoryTypeIndexForBufferInfo(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result findMemoryTypeIndexForBufferInfo(const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
uint32_t* memoryTypeIndex) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<uint32_t>::type findMemoryTypeIndexForImageInfo(const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo,
const AllocationCreateInfo& allocationCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result findMemoryTypeIndexForImageInfo(const VULKAN_HPP_NAMESPACE::ImageCreateInfo* imageCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
uint32_t* memoryTypeIndex) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Pool>::type createPool(const PoolCreateInfo& createInfo) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniquePool>::type createPoolUnique(const PoolCreateInfo& createInfo) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createPool(const PoolCreateInfo* createInfo,
Pool* pool) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroyPool(Pool pool) const;
#else
void destroyPool(Pool pool) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS Statistics getPoolStatistics(Pool pool) const;
#endif
void getPoolStatistics(Pool pool,
Statistics* poolStats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS DetailedStatistics calculatePoolStatistics(Pool pool) const;
#endif
void calculatePoolStatistics(Pool pool,
DetailedStatistics* poolStats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type checkPoolCorruption(Pool pool) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result checkPoolCorruption(Pool pool) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS const char* getPoolName(Pool pool) const;
#endif
void getPoolName(Pool pool,
const char** name) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setPoolName(Pool pool,
const char* name) const;
#else
void setPoolName(Pool pool,
const char* name) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocation>::type allocateMemory(const VULKAN_HPP_NAMESPACE::MemoryRequirements& vkMemoryRequirements,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocation>::type allocateMemoryUnique(const VULKAN_HPP_NAMESPACE::MemoryRequirements& vkMemoryRequirements,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemory(const VULKAN_HPP_NAMESPACE::MemoryRequirements* vkMemoryRequirements,
const AllocationCreateInfo* createInfo,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
template<typename VectorAllocator = std::allocator<Allocation>,
typename B = VectorAllocator,
typename std::enable_if<std::is_same<typename B::value_type, Allocation>::value, int>::type = 0>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<Allocation, VectorAllocator>>::type allocateMemoryPages(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo,
VectorAllocator& vectorAllocator) const;
template<typename VectorAllocator = std::allocator<Allocation>>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<Allocation, VectorAllocator>>::type allocateMemoryPages(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
template<typename VectorAllocator = std::allocator<UniqueAllocation>,
typename B = VectorAllocator,
typename std::enable_if<std::is_same<typename B::value_type, UniqueAllocation>::value, int>::type = 0>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<UniqueAllocation, VectorAllocator>>::type allocateMemoryPagesUnique(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo,
VectorAllocator& vectorAllocator) const;
template<typename VectorAllocator = std::allocator<UniqueAllocation>>
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::vector<UniqueAllocation, VectorAllocator>>::type allocateMemoryPagesUnique(VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::MemoryRequirements> vkMemoryRequirements,
VULKAN_HPP_NAMESPACE::ArrayProxy<const AllocationCreateInfo> createInfo,
VULKAN_HPP_NAMESPACE::ArrayProxyNoTemporaries<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemoryPages(const VULKAN_HPP_NAMESPACE::MemoryRequirements* vkMemoryRequirements,
const AllocationCreateInfo* createInfo,
size_t allocationCount,
Allocation* allocations,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocation>::type allocateMemoryForBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocation>::type allocateMemoryForBufferUnique(VULKAN_HPP_NAMESPACE::Buffer buffer,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemoryForBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
const AllocationCreateInfo* createInfo,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocation>::type allocateMemoryForImage(VULKAN_HPP_NAMESPACE::Image image,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocation>::type allocateMemoryForImageUnique(VULKAN_HPP_NAMESPACE::Image image,
const AllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result allocateMemoryForImage(VULKAN_HPP_NAMESPACE::Image image,
const AllocationCreateInfo* createInfo,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeMemory(const Allocation allocation) const;
#else
void freeMemory(const Allocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeMemoryPages(VULKAN_HPP_NAMESPACE::ArrayProxy<const Allocation> allocations) const;
#endif
void freeMemoryPages(size_t allocationCount,
const Allocation* allocations) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS AllocationInfo getAllocationInfo(Allocation allocation) const;
#endif
void getAllocationInfo(Allocation allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setAllocationUserData(Allocation allocation,
void* userData) const;
#else
void setAllocationUserData(Allocation allocation,
void* userData) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setAllocationName(Allocation allocation,
const char* name) const;
#else
void setAllocationName(Allocation allocation,
const char* name) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VULKAN_HPP_NAMESPACE::MemoryPropertyFlags getAllocationMemoryProperties(Allocation allocation) const;
#endif
void getAllocationMemoryProperties(Allocation allocation,
VULKAN_HPP_NAMESPACE::MemoryPropertyFlags* flags) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<void*>::type mapMemory(Allocation allocation) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result mapMemory(Allocation allocation,
void** data) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void unmapMemory(Allocation allocation) const;
#else
void unmapMemory(Allocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type flushAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result flushAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type invalidateAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result invalidateAllocation(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize offset,
VULKAN_HPP_NAMESPACE::DeviceSize size) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type flushAllocations(VULKAN_HPP_NAMESPACE::ArrayProxy<const Allocation> allocations,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> offsets,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> sizes) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result flushAllocations(uint32_t allocationCount,
const Allocation* allocations,
const VULKAN_HPP_NAMESPACE::DeviceSize* offsets,
const VULKAN_HPP_NAMESPACE::DeviceSize* sizes) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type invalidateAllocations(VULKAN_HPP_NAMESPACE::ArrayProxy<const Allocation> allocations,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> offsets,
VULKAN_HPP_NAMESPACE::ArrayProxy<const VULKAN_HPP_NAMESPACE::DeviceSize> sizes) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result invalidateAllocations(uint32_t allocationCount,
const Allocation* allocations,
const VULKAN_HPP_NAMESPACE::DeviceSize* offsets,
const VULKAN_HPP_NAMESPACE::DeviceSize* sizes) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type checkCorruption(uint32_t memoryTypeBits) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result checkCorruption(uint32_t memoryTypeBits) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<DefragmentationContext>::type beginDefragmentation(const DefragmentationInfo& info) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result beginDefragmentation(const DefragmentationInfo* info,
DefragmentationContext* context) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void endDefragmentation(DefragmentationContext context,
VULKAN_HPP_NAMESPACE::Optional<DefragmentationStats> stats = nullptr) const;
#endif
void endDefragmentation(DefragmentationContext context,
DefragmentationStats* stats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<DefragmentationPassMoveInfo>::type beginDefragmentationPass(DefragmentationContext context) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result beginDefragmentationPass(DefragmentationContext context,
DefragmentationPassMoveInfo* passInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<DefragmentationPassMoveInfo>::type endDefragmentationPass(DefragmentationContext context) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result endDefragmentationPass(DefragmentationContext context,
DefragmentationPassMoveInfo* passInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindBufferMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Buffer buffer) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindBufferMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Buffer buffer) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindBufferMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Buffer buffer,
const void* next) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindBufferMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Buffer buffer,
const void* next) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindImageMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Image image) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindImageMemory(Allocation allocation,
VULKAN_HPP_NAMESPACE::Image image) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
typename VULKAN_HPP_NAMESPACE::ResultValueType<void>::type bindImageMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Image image,
const void* next) const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result bindImageMemory2(Allocation allocation,
VULKAN_HPP_NAMESPACE::DeviceSize allocationLocalOffset,
VULKAN_HPP_NAMESPACE::Image image,
const void* next) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<VULKAN_HPP_NAMESPACE::Buffer, Allocation>>::type createBuffer(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<UniqueBuffer, UniqueAllocation>>::type createBufferUnique(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createBuffer(const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Buffer* buffer,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<VULKAN_HPP_NAMESPACE::Buffer, Allocation>>::type createBufferWithAlignment(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::DeviceSize minAlignment,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<UniqueBuffer, UniqueAllocation>>::type createBufferWithAlignmentUnique(const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::DeviceSize minAlignment,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createBufferWithAlignment(const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
VULKAN_HPP_NAMESPACE::DeviceSize minAlignment,
VULKAN_HPP_NAMESPACE::Buffer* buffer,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VULKAN_HPP_NAMESPACE::Buffer>::type createAliasingBuffer(Allocation allocation,
const VULKAN_HPP_NAMESPACE::BufferCreateInfo& bufferCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createAliasingBuffer(Allocation allocation,
const VULKAN_HPP_NAMESPACE::BufferCreateInfo* bufferCreateInfo,
VULKAN_HPP_NAMESPACE::Buffer* buffer) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroyBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
Allocation allocation) const;
#else
void destroyBuffer(VULKAN_HPP_NAMESPACE::Buffer buffer,
Allocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<VULKAN_HPP_NAMESPACE::Image, Allocation>>::type createImage(const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<std::pair<UniqueImage, UniqueAllocation>>::type createImageUnique(const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo,
const AllocationCreateInfo& allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Optional<AllocationInfo> allocationInfo = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createImage(const VULKAN_HPP_NAMESPACE::ImageCreateInfo* imageCreateInfo,
const AllocationCreateInfo* allocationCreateInfo,
VULKAN_HPP_NAMESPACE::Image* image,
Allocation* allocation,
AllocationInfo* allocationInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VULKAN_HPP_NAMESPACE::Image>::type createAliasingImage(Allocation allocation,
const VULKAN_HPP_NAMESPACE::ImageCreateInfo& imageCreateInfo) const;
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createAliasingImage(Allocation allocation,
const VULKAN_HPP_NAMESPACE::ImageCreateInfo* imageCreateInfo,
VULKAN_HPP_NAMESPACE::Image* image) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroyImage(VULKAN_HPP_NAMESPACE::Image image,
Allocation allocation) const;
#else
void destroyImage(VULKAN_HPP_NAMESPACE::Image image,
Allocation allocation) const;
#endif
#if VMA_STATS_STRING_ENABLED
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS char* buildStatsString(VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#endif
void buildStatsString(char** statsString,
VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeStatsString(char* statsString) const;
#else
void freeStatsString(char* statsString) const;
#endif
#endif
private:
VmaAllocator m_allocator = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(Allocator) == sizeof(VmaAllocator),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<VMA_HPP_NAMESPACE::Allocator, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::Allocator, void>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueAllocator = VULKAN_HPP_NAMESPACE::UniqueHandle<Allocator, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class VirtualAllocation {
public:
using CType = VmaVirtualAllocation;
using NativeType = VmaVirtualAllocation;
public:
VULKAN_HPP_CONSTEXPR VirtualAllocation() = default;
VULKAN_HPP_CONSTEXPR VirtualAllocation(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT VirtualAllocation(VmaVirtualAllocation virtualAllocation) VULKAN_HPP_NOEXCEPT : m_virtualAllocation(virtualAllocation) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
VirtualAllocation& operator=(VmaVirtualAllocation virtualAllocation) VULKAN_HPP_NOEXCEPT {
m_virtualAllocation = virtualAllocation;
return *this;
}
#endif
VirtualAllocation& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_virtualAllocation = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(VirtualAllocation const &) const = default;
#else
bool operator==(VirtualAllocation const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation == rhs.m_virtualAllocation;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaVirtualAllocation() const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_virtualAllocation == VK_NULL_HANDLE;
}
private:
VmaVirtualAllocation m_virtualAllocation = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(VirtualAllocation) == sizeof(VmaVirtualAllocation),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<VMA_HPP_NAMESPACE::VirtualAllocation, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::VirtualAllocation, VMA_HPP_NAMESPACE::VirtualBlock>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueVirtualAllocation = VULKAN_HPP_NAMESPACE::UniqueHandle<VirtualAllocation, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
class VirtualBlock {
public:
using CType = VmaVirtualBlock;
using NativeType = VmaVirtualBlock;
public:
VULKAN_HPP_CONSTEXPR VirtualBlock() = default;
VULKAN_HPP_CONSTEXPR VirtualBlock(std::nullptr_t) VULKAN_HPP_NOEXCEPT {}
VULKAN_HPP_TYPESAFE_EXPLICIT VirtualBlock(VmaVirtualBlock virtualBlock) VULKAN_HPP_NOEXCEPT : m_virtualBlock(virtualBlock) {}
#if defined(VULKAN_HPP_TYPESAFE_CONVERSION)
VirtualBlock& operator=(VmaVirtualBlock virtualBlock) VULKAN_HPP_NOEXCEPT {
m_virtualBlock = virtualBlock;
return *this;
}
#endif
VirtualBlock& operator=(std::nullptr_t) VULKAN_HPP_NOEXCEPT {
m_virtualBlock = {};
return *this;
}
#if defined( VULKAN_HPP_HAS_SPACESHIP_OPERATOR )
auto operator<=>(VirtualBlock const &) const = default;
#else
bool operator==(VirtualBlock const & rhs) const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock == rhs.m_virtualBlock;
}
#endif
VULKAN_HPP_TYPESAFE_EXPLICIT operator VmaVirtualBlock() const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock;
}
explicit operator bool() const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock != VK_NULL_HANDLE;
}
bool operator!() const VULKAN_HPP_NOEXCEPT {
return m_virtualBlock == VK_NULL_HANDLE;
}
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void destroy() const;
#else
void destroy() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VULKAN_HPP_NAMESPACE::Bool32 isVirtualBlockEmpty() const;
#else
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Bool32 isVirtualBlockEmpty() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS VirtualAllocationInfo getVirtualAllocationInfo(VirtualAllocation allocation) const;
#endif
void getVirtualAllocationInfo(VirtualAllocation allocation,
VirtualAllocationInfo* virtualAllocInfo) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VirtualAllocation>::type virtualAllocate(const VirtualAllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<VULKAN_HPP_NAMESPACE::DeviceSize> offset = nullptr) const;
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueVirtualAllocation>::type virtualAllocateUnique(const VirtualAllocationCreateInfo& createInfo,
VULKAN_HPP_NAMESPACE::Optional<VULKAN_HPP_NAMESPACE::DeviceSize> offset = nullptr) const;
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result virtualAllocate(const VirtualAllocationCreateInfo* createInfo,
VirtualAllocation* allocation,
VULKAN_HPP_NAMESPACE::DeviceSize* offset) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void virtualFree(VirtualAllocation allocation) const;
#else
void virtualFree(VirtualAllocation allocation) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void clearVirtualBlock() const;
#else
void clearVirtualBlock() const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void setVirtualAllocationUserData(VirtualAllocation allocation,
void* userData) const;
#else
void setVirtualAllocationUserData(VirtualAllocation allocation,
void* userData) const;
#endif
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS Statistics getVirtualBlockStatistics() const;
#endif
void getVirtualBlockStatistics(Statistics* stats) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS DetailedStatistics calculateVirtualBlockStatistics() const;
#endif
void calculateVirtualBlockStatistics(DetailedStatistics* stats) const;
#if VMA_STATS_STRING_ENABLED
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS char* buildVirtualBlockStatsString(VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#endif
void buildVirtualBlockStatsString(char** statsString,
VULKAN_HPP_NAMESPACE::Bool32 detailedMap) const;
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
void freeVirtualBlockStatsString(char* statsString) const;
#else
void freeVirtualBlockStatsString(char* statsString) const;
#endif
#endif
private:
VmaVirtualBlock m_virtualBlock = {};
};
VULKAN_HPP_STATIC_ASSERT(sizeof(VirtualBlock) == sizeof(VmaVirtualBlock),
"handle and wrapper have different size!");
}
#ifndef VULKAN_HPP_NO_SMART_HANDLE
namespace VULKAN_HPP_NAMESPACE {
template<> struct UniqueHandleTraits<VMA_HPP_NAMESPACE::VirtualBlock, VMA_HPP_NAMESPACE::Dispatcher> {
using deleter = VMA_HPP_NAMESPACE::Deleter<VMA_HPP_NAMESPACE::VirtualBlock, void>;
};
}
namespace VMA_HPP_NAMESPACE { using UniqueVirtualBlock = VULKAN_HPP_NAMESPACE::UniqueHandle<VirtualBlock, Dispatcher>; }
#endif
namespace VMA_HPP_NAMESPACE {
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<Allocator>::type createAllocator(const AllocatorCreateInfo& createInfo);
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueAllocator>::type createAllocatorUnique(const AllocatorCreateInfo& createInfo);
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createAllocator(const AllocatorCreateInfo* createInfo,
Allocator* allocator);
#ifndef VULKAN_HPP_DISABLE_ENHANCED_MODE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<VirtualBlock>::type createVirtualBlock(const VirtualBlockCreateInfo& createInfo);
#ifndef VULKAN_HPP_NO_SMART_HANDLE
VULKAN_HPP_NODISCARD_WHEN_NO_EXCEPTIONS typename VULKAN_HPP_NAMESPACE::ResultValueType<UniqueVirtualBlock>::type createVirtualBlockUnique(const VirtualBlockCreateInfo& createInfo);
#endif
#endif
VULKAN_HPP_NODISCARD VULKAN_HPP_NAMESPACE::Result createVirtualBlock(const VirtualBlockCreateInfo* createInfo,
VirtualBlock* virtualBlock);
}
#endif

File diff suppressed because it is too large

external/stb/stb_image.h (vendored, new file, 7897 lines)

File diff suppressed because it is too large


@@ -0,0 +1,2 @@
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

@@ -1 +0,0 @@
Subproject commit 89d366355e6fe1221c9be40bb2cf3716449e9a7e

external/vulkan-headers/LICENSE.txt (vendored, new file, 202 lines)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,262 @@
//
// File: vk_icd.h
//
/*
* Copyright (c) 2015-2016, 2022 The Khronos Group Inc.
* Copyright (c) 2015-2016, 2022 Valve Corporation
* Copyright (c) 2015-2016, 2022 LunarG, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef VKICD_H
#define VKICD_H
#include "vulkan.h"
#include <stdbool.h>
// Loader-ICD version negotiation API. Versions add the following features:
// Version 0 - Initial. Doesn't support vk_icdGetInstanceProcAddr
// or vk_icdNegotiateLoaderICDInterfaceVersion.
// Version 1 - Add support for vk_icdGetInstanceProcAddr.
// Version 2 - Add Loader/ICD Interface version negotiation
// via vk_icdNegotiateLoaderICDInterfaceVersion.
// Version 3 - Add ICD creation/destruction of KHR_surface objects.
// Version 4 - Add unknown physical device extension querying via
// vk_icdGetPhysicalDeviceProcAddr.
// Version 5 - Tells ICDs that the loader is now paying attention to the
// application version of Vulkan passed into the ApplicationInfo
// structure during vkCreateInstance. This will tell the ICD
// that if the loader is older, it should automatically fail a
// call for any API version > 1.0. Otherwise, the loader will
// manually determine if it can support the expected version.
// Version 6 - Add support for vk_icdEnumerateAdapterPhysicalDevices.
// Version 7 - If an ICD supports any of the following functions, they must be
// queryable with vk_icdGetInstanceProcAddr:
// vk_icdNegotiateLoaderICDInterfaceVersion
// vk_icdGetPhysicalDeviceProcAddr
// vk_icdEnumerateAdapterPhysicalDevices (Windows only)
// In addition, these functions no longer need to be exported directly.
// This version allows drivers provided through the extension
// VK_LUNARG_direct_driver_loading to support the entire
// Driver-Loader interface.
#define CURRENT_LOADER_ICD_INTERFACE_VERSION 7
#define MIN_SUPPORTED_LOADER_ICD_INTERFACE_VERSION 0
#define MIN_PHYS_DEV_EXTENSION_ICD_INTERFACE_VERSION 4
// Old typedefs that don't follow a proper naming convention but are preserved for compatibility
typedef VkResult(VKAPI_PTR *PFN_vkNegotiateLoaderICDInterfaceVersion)(uint32_t *pVersion);
// This is defined in vk_layer.h which will be found by the loader, but if an ICD is building against this
// file directly, it won't be found.
#ifndef PFN_GetPhysicalDeviceProcAddr
typedef PFN_vkVoidFunction(VKAPI_PTR *PFN_GetPhysicalDeviceProcAddr)(VkInstance instance, const char *pName);
#endif
// Typedefs for loader/ICD interface
typedef VkResult (VKAPI_PTR *PFN_vk_icdNegotiateLoaderICDInterfaceVersion)(uint32_t* pVersion);
typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_vk_icdGetInstanceProcAddr)(VkInstance instance, const char* pName);
typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_vk_icdGetPhysicalDeviceProcAddr)(VkInstance instance, const char* pName);
#if defined(VK_USE_PLATFORM_WIN32_KHR)
typedef VkResult (VKAPI_PTR *PFN_vk_icdEnumerateAdapterPhysicalDevices)(VkInstance instance, LUID adapterLUID,
uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices);
#endif
// Prototypes for loader/ICD interface
#if !defined(VK_NO_PROTOTYPES)
#ifdef __cplusplus
extern "C" {
#endif
VKAPI_ATTR VkResult VKAPI_CALL vk_icdNegotiateLoaderICDInterfaceVersion(uint32_t* pVersion);
VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vk_icdGetInstanceProcAddr(VkInstance instance, const char* pName);
VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vk_icdGetPhysicalDeviceProcAddr(VkInstance instance, const char* pName);
#if defined(VK_USE_PLATFORM_WIN32_KHR)
VKAPI_ATTR VkResult VKAPI_CALL vk_icdEnumerateAdapterPhysicalDevices(VkInstance instance, LUID adapterLUID,
uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices);
#endif
#ifdef __cplusplus
}
#endif
#endif
/*
* The ICD must reserve space for a pointer for the loader's dispatch
* table, at the start of <each object>.
* The ICD must initialize this variable using the SET_LOADER_MAGIC_VALUE macro.
*/
#define ICD_LOADER_MAGIC 0x01CDC0DE
typedef union {
uintptr_t loaderMagic;
void *loaderData;
} VK_LOADER_DATA;
static inline void set_loader_magic_value(void *pNewObject) {
VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
loader_info->loaderMagic = ICD_LOADER_MAGIC;
}
static inline bool valid_loader_magic_value(void *pNewObject) {
const VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *)pNewObject;
return (loader_info->loaderMagic & 0xffffffff) == ICD_LOADER_MAGIC;
}
/*
* Windows and Linux ICDs will treat VkSurfaceKHR as a pointer to a struct that
* contains the platform-specific connection and surface information.
*/
typedef enum {
VK_ICD_WSI_PLATFORM_MIR,
VK_ICD_WSI_PLATFORM_WAYLAND,
VK_ICD_WSI_PLATFORM_WIN32,
VK_ICD_WSI_PLATFORM_XCB,
VK_ICD_WSI_PLATFORM_XLIB,
VK_ICD_WSI_PLATFORM_ANDROID,
VK_ICD_WSI_PLATFORM_MACOS,
VK_ICD_WSI_PLATFORM_IOS,
VK_ICD_WSI_PLATFORM_DISPLAY,
VK_ICD_WSI_PLATFORM_HEADLESS,
VK_ICD_WSI_PLATFORM_METAL,
VK_ICD_WSI_PLATFORM_DIRECTFB,
VK_ICD_WSI_PLATFORM_VI,
VK_ICD_WSI_PLATFORM_GGP,
VK_ICD_WSI_PLATFORM_SCREEN,
VK_ICD_WSI_PLATFORM_FUCHSIA,
} VkIcdWsiPlatform;
typedef struct {
VkIcdWsiPlatform platform;
} VkIcdSurfaceBase;
#ifdef VK_USE_PLATFORM_MIR_KHR
typedef struct {
VkIcdSurfaceBase base;
MirConnection *connection;
MirSurface *mirSurface;
} VkIcdSurfaceMir;
#endif // VK_USE_PLATFORM_MIR_KHR
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
typedef struct {
VkIcdSurfaceBase base;
struct wl_display *display;
struct wl_surface *surface;
} VkIcdSurfaceWayland;
#endif // VK_USE_PLATFORM_WAYLAND_KHR
#ifdef VK_USE_PLATFORM_WIN32_KHR
typedef struct {
VkIcdSurfaceBase base;
HINSTANCE hinstance;
HWND hwnd;
} VkIcdSurfaceWin32;
#endif // VK_USE_PLATFORM_WIN32_KHR
#ifdef VK_USE_PLATFORM_XCB_KHR
typedef struct {
VkIcdSurfaceBase base;
xcb_connection_t *connection;
xcb_window_t window;
} VkIcdSurfaceXcb;
#endif // VK_USE_PLATFORM_XCB_KHR
#ifdef VK_USE_PLATFORM_XLIB_KHR
typedef struct {
VkIcdSurfaceBase base;
Display *dpy;
Window window;
} VkIcdSurfaceXlib;
#endif // VK_USE_PLATFORM_XLIB_KHR
#ifdef VK_USE_PLATFORM_DIRECTFB_EXT
typedef struct {
VkIcdSurfaceBase base;
IDirectFB *dfb;
IDirectFBSurface *surface;
} VkIcdSurfaceDirectFB;
#endif // VK_USE_PLATFORM_DIRECTFB_EXT
#ifdef VK_USE_PLATFORM_ANDROID_KHR
typedef struct {
VkIcdSurfaceBase base;
struct ANativeWindow *window;
} VkIcdSurfaceAndroid;
#endif // VK_USE_PLATFORM_ANDROID_KHR
#ifdef VK_USE_PLATFORM_MACOS_MVK
typedef struct {
VkIcdSurfaceBase base;
const void *pView;
} VkIcdSurfaceMacOS;
#endif // VK_USE_PLATFORM_MACOS_MVK
#ifdef VK_USE_PLATFORM_IOS_MVK
typedef struct {
VkIcdSurfaceBase base;
const void *pView;
} VkIcdSurfaceIOS;
#endif // VK_USE_PLATFORM_IOS_MVK
#ifdef VK_USE_PLATFORM_GGP
typedef struct {
VkIcdSurfaceBase base;
GgpStreamDescriptor streamDescriptor;
} VkIcdSurfaceGgp;
#endif // VK_USE_PLATFORM_GGP
typedef struct {
VkIcdSurfaceBase base;
VkDisplayModeKHR displayMode;
uint32_t planeIndex;
uint32_t planeStackIndex;
VkSurfaceTransformFlagBitsKHR transform;
float globalAlpha;
VkDisplayPlaneAlphaFlagBitsKHR alphaMode;
VkExtent2D imageExtent;
} VkIcdSurfaceDisplay;
typedef struct {
VkIcdSurfaceBase base;
} VkIcdSurfaceHeadless;
#ifdef VK_USE_PLATFORM_METAL_EXT
typedef struct {
VkIcdSurfaceBase base;
const CAMetalLayer *pLayer;
} VkIcdSurfaceMetal;
#endif // VK_USE_PLATFORM_METAL_EXT
#ifdef VK_USE_PLATFORM_VI_NN
typedef struct {
VkIcdSurfaceBase base;
void *window;
} VkIcdSurfaceVi;
#endif // VK_USE_PLATFORM_VI_NN
#ifdef VK_USE_PLATFORM_SCREEN_QNX
typedef struct {
VkIcdSurfaceBase base;
struct _screen_context *context;
struct _screen_window *window;
} VkIcdSurfaceScreen;
#endif // VK_USE_PLATFORM_SCREEN_QNX
#ifdef VK_USE_PLATFORM_FUCHSIA
typedef struct {
VkIcdSurfaceBase base;
} VkIcdSurfaceImagePipe;
#endif // VK_USE_PLATFORM_FUCHSIA
#endif // VKICD_H


@@ -0,0 +1,211 @@
//
// File: vk_layer.h
//
/*
* Copyright (c) 2015-2017 The Khronos Group Inc.
* Copyright (c) 2015-2017 Valve Corporation
* Copyright (c) 2015-2017 LunarG, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
/* Need to define dispatch table
* Core struct can then have ptr to dispatch table at the top
* Along with object ptrs for current and next OBJ
*/
#pragma once
#include "vulkan_core.h"
#if defined(__GNUC__) && __GNUC__ >= 4
#define VK_LAYER_EXPORT __attribute__((visibility("default")))
#elif defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590)
#define VK_LAYER_EXPORT __attribute__((visibility("default")))
#else
#define VK_LAYER_EXPORT
#endif
#define MAX_NUM_UNKNOWN_EXTS 250
// Loader-Layer version negotiation API. Versions add the following features:
// Versions 0/1 - Initial. Doesn't support vk_layerGetPhysicalDeviceProcAddr
// or vk_icdNegotiateLoaderLayerInterfaceVersion.
// Version 2 - Add support for vk_layerGetPhysicalDeviceProcAddr and
// vk_icdNegotiateLoaderLayerInterfaceVersion.
#define CURRENT_LOADER_LAYER_INTERFACE_VERSION 2
#define MIN_SUPPORTED_LOADER_LAYER_INTERFACE_VERSION 1
#define VK_CURRENT_CHAIN_VERSION 1
// Typedef for use in the interfaces below
typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_GetPhysicalDeviceProcAddr)(VkInstance instance, const char* pName);
// Version negotiation values
typedef enum VkNegotiateLayerStructType {
LAYER_NEGOTIATE_UNINTIALIZED = 0,
LAYER_NEGOTIATE_INTERFACE_STRUCT = 1,
} VkNegotiateLayerStructType;
// Version negotiation structures
typedef struct VkNegotiateLayerInterface {
VkNegotiateLayerStructType sType;
void *pNext;
uint32_t loaderLayerInterfaceVersion;
PFN_vkGetInstanceProcAddr pfnGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr pfnGetDeviceProcAddr;
PFN_GetPhysicalDeviceProcAddr pfnGetPhysicalDeviceProcAddr;
} VkNegotiateLayerInterface;
// Version negotiation functions
typedef VkResult (VKAPI_PTR *PFN_vkNegotiateLoaderLayerInterfaceVersion)(VkNegotiateLayerInterface *pVersionStruct);
// Function prototype for unknown physical device extension command
typedef VkResult(VKAPI_PTR *PFN_PhysDevExt)(VkPhysicalDevice phys_device);
// ------------------------------------------------------------------------------------------------
// CreateInstance and CreateDevice support structures
/* Sub type of structure for instance and device loader ext of CreateInfo.
* When sType == VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO
* or sType == VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO
* then VkLayerFunction indicates struct type pointed to by pNext
*/
typedef enum VkLayerFunction_ {
VK_LAYER_LINK_INFO = 0,
VK_LOADER_DATA_CALLBACK = 1,
VK_LOADER_LAYER_CREATE_DEVICE_CALLBACK = 2,
VK_LOADER_FEATURES = 3,
} VkLayerFunction;
typedef struct VkLayerInstanceLink_ {
struct VkLayerInstanceLink_ *pNext;
PFN_vkGetInstanceProcAddr pfnNextGetInstanceProcAddr;
PFN_GetPhysicalDeviceProcAddr pfnNextGetPhysicalDeviceProcAddr;
} VkLayerInstanceLink;
/*
* When creating the device chain the loader needs to pass
 * down information about its device structure needed at
* the end of the chain. Passing the data via the
* VkLayerDeviceInfo avoids issues with finding the
* exact instance being used.
*/
typedef struct VkLayerDeviceInfo_ {
void *device_info;
PFN_vkGetInstanceProcAddr pfnNextGetInstanceProcAddr;
} VkLayerDeviceInfo;
typedef VkResult (VKAPI_PTR *PFN_vkSetInstanceLoaderData)(VkInstance instance,
void *object);
typedef VkResult (VKAPI_PTR *PFN_vkSetDeviceLoaderData)(VkDevice device,
void *object);
typedef VkResult (VKAPI_PTR *PFN_vkLayerCreateDevice)(VkInstance instance, VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo *pCreateInfo,
const VkAllocationCallbacks *pAllocator, VkDevice *pDevice, PFN_vkGetInstanceProcAddr layerGIPA, PFN_vkGetDeviceProcAddr *nextGDPA);
typedef void (VKAPI_PTR *PFN_vkLayerDestroyDevice)(VkDevice device, const VkAllocationCallbacks *pAllocator, PFN_vkDestroyDevice destroyFunction);
typedef enum VkLoaderFeatureFlagBits {
    VK_LOADER_FEATURE_PHYSICAL_DEVICE_SORTING = 0x00000001,
} VkLoaderFeatureFlagBits;
typedef VkFlags VkLoaderFeatureFlags;
typedef struct {
VkStructureType sType; // VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO
const void *pNext;
VkLayerFunction function;
union {
VkLayerInstanceLink *pLayerInfo;
PFN_vkSetInstanceLoaderData pfnSetInstanceLoaderData;
struct {
PFN_vkLayerCreateDevice pfnLayerCreateDevice;
PFN_vkLayerDestroyDevice pfnLayerDestroyDevice;
} layerDevice;
VkLoaderFeatureFlags loaderFeatures;
} u;
} VkLayerInstanceCreateInfo;
typedef struct VkLayerDeviceLink_ {
struct VkLayerDeviceLink_ *pNext;
PFN_vkGetInstanceProcAddr pfnNextGetInstanceProcAddr;
PFN_vkGetDeviceProcAddr pfnNextGetDeviceProcAddr;
} VkLayerDeviceLink;
typedef struct {
VkStructureType sType; // VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO
const void *pNext;
VkLayerFunction function;
union {
VkLayerDeviceLink *pLayerInfo;
PFN_vkSetDeviceLoaderData pfnSetDeviceLoaderData;
} u;
} VkLayerDeviceCreateInfo;
#ifdef __cplusplus
extern "C" {
#endif
VKAPI_ATTR VkResult VKAPI_CALL vkNegotiateLoaderLayerInterfaceVersion(VkNegotiateLayerInterface *pVersionStruct);
typedef enum VkChainType {
VK_CHAIN_TYPE_UNKNOWN = 0,
VK_CHAIN_TYPE_ENUMERATE_INSTANCE_EXTENSION_PROPERTIES = 1,
VK_CHAIN_TYPE_ENUMERATE_INSTANCE_LAYER_PROPERTIES = 2,
VK_CHAIN_TYPE_ENUMERATE_INSTANCE_VERSION = 3,
} VkChainType;
typedef struct VkChainHeader {
VkChainType type;
uint32_t version;
uint32_t size;
} VkChainHeader;
typedef struct VkEnumerateInstanceExtensionPropertiesChain {
VkChainHeader header;
VkResult(VKAPI_PTR *pfnNextLayer)(const struct VkEnumerateInstanceExtensionPropertiesChain *, const char *, uint32_t *,
VkExtensionProperties *);
const struct VkEnumerateInstanceExtensionPropertiesChain *pNextLink;
#if defined(__cplusplus)
inline VkResult CallDown(const char *pLayerName, uint32_t *pPropertyCount, VkExtensionProperties *pProperties) const {
return pfnNextLayer(pNextLink, pLayerName, pPropertyCount, pProperties);
}
#endif
} VkEnumerateInstanceExtensionPropertiesChain;
typedef struct VkEnumerateInstanceLayerPropertiesChain {
VkChainHeader header;
VkResult(VKAPI_PTR *pfnNextLayer)(const struct VkEnumerateInstanceLayerPropertiesChain *, uint32_t *, VkLayerProperties *);
const struct VkEnumerateInstanceLayerPropertiesChain *pNextLink;
#if defined(__cplusplus)
inline VkResult CallDown(uint32_t *pPropertyCount, VkLayerProperties *pProperties) const {
return pfnNextLayer(pNextLink, pPropertyCount, pProperties);
}
#endif
} VkEnumerateInstanceLayerPropertiesChain;
typedef struct VkEnumerateInstanceVersionChain {
VkChainHeader header;
VkResult(VKAPI_PTR *pfnNextLayer)(const struct VkEnumerateInstanceVersionChain *, uint32_t *);
const struct VkEnumerateInstanceVersionChain *pNextLink;
#if defined(__cplusplus)
inline VkResult CallDown(uint32_t *pApiVersion) const {
return pfnNextLayer(pNextLink, pApiVersion);
}
#endif
} VkEnumerateInstanceVersionChain;
#ifdef __cplusplus
}
#endif
@@ -0,0 +1,84 @@
//
// File: vk_platform.h
//
/*
** Copyright 2014-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
#ifndef VK_PLATFORM_H_
#define VK_PLATFORM_H_
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
/*
***************************************************************************************************
* Platform-specific directives and type declarations
***************************************************************************************************
*/
/* Platform-specific calling convention macros.
*
* Platforms should define these so that Vulkan clients call Vulkan commands
* with the same calling conventions that the Vulkan implementation expects.
*
* VKAPI_ATTR - Placed before the return type in function declarations.
* Useful for C++11 and GCC/Clang-style function attribute syntax.
* VKAPI_CALL - Placed after the return type in function declarations.
* Useful for MSVC-style calling convention syntax.
* VKAPI_PTR - Placed between the '(' and '*' in function pointer types.
*
* Function declaration: VKAPI_ATTR void VKAPI_CALL vkCommand(void);
* Function pointer type: typedef void (VKAPI_PTR *PFN_vkCommand)(void);
*/
#if defined(_WIN32)
// On Windows, Vulkan commands use the stdcall convention
#define VKAPI_ATTR
#define VKAPI_CALL __stdcall
#define VKAPI_PTR VKAPI_CALL
#elif defined(__ANDROID__) && defined(__ARM_ARCH) && __ARM_ARCH < 7
#error "Vulkan is not supported for the 'armeabi' NDK ABI"
#elif defined(__ANDROID__) && defined(__ARM_ARCH) && __ARM_ARCH >= 7 && defined(__ARM_32BIT_STATE)
// On Android 32-bit ARM targets, Vulkan functions use the "hardfloat"
// calling convention, i.e. float parameters are passed in registers. This
// is true even if the rest of the application passes floats on the stack,
// as it does by default when compiling for the armeabi-v7a NDK ABI.
#define VKAPI_ATTR __attribute__((pcs("aapcs-vfp")))
#define VKAPI_CALL
#define VKAPI_PTR VKAPI_ATTR
#else
// On other platforms, use the default calling convention
#define VKAPI_ATTR
#define VKAPI_CALL
#define VKAPI_PTR
#endif
#if !defined(VK_NO_STDDEF_H)
#include <stddef.h>
#endif // !defined(VK_NO_STDDEF_H)
#if !defined(VK_NO_STDINT_H)
#if defined(_MSC_VER) && (_MSC_VER < 1600)
typedef signed __int8 int8_t;
typedef unsigned __int8 uint8_t;
typedef signed __int16 int16_t;
typedef unsigned __int16 uint16_t;
typedef signed __int32 int32_t;
typedef unsigned __int32 uint32_t;
typedef signed __int64 int64_t;
typedef unsigned __int64 uint64_t;
#else
#include <stdint.h>
#endif
#endif // !defined(VK_NO_STDINT_H)
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
#endif
@@ -0,0 +1,71 @@
//
// File: vk_sdk_platform.h
//
/*
* Copyright (c) 2015-2016 The Khronos Group Inc.
* Copyright (c) 2015-2016 Valve Corporation
* Copyright (c) 2015-2016 LunarG, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef VK_SDK_PLATFORM_H
#define VK_SDK_PLATFORM_H
#if defined(_WIN32)
#ifndef NOMINMAX
#define NOMINMAX
#endif
#ifndef __cplusplus
#undef inline
#define inline __inline
#endif // __cplusplus
#if (defined(_MSC_VER) && _MSC_VER < 1900 /*vs2015*/)
// C99:
// Microsoft didn't implement C99 in Visual Studio, but started adding it with
// VS2013. However, VS2013 still didn't have snprintf(). The following is a
// work-around (Note: The _CRT_SECURE_NO_WARNINGS macro must be set in the
// "CMakeLists.txt" file).
// NOTE: This is fixed in Visual Studio 2015.
#define snprintf _snprintf
#endif
#define strdup _strdup
#endif // _WIN32
// Check for noexcept support using clang, with fallback to Windows or GCC version numbers
#ifndef NOEXCEPT
#if defined(__clang__)
#if __has_feature(cxx_noexcept)
#define HAS_NOEXCEPT
#endif
#else
#if defined(__GXX_EXPERIMENTAL_CXX0X__) && __GNUC__ * 10 + __GNUC_MINOR__ >= 46
#define HAS_NOEXCEPT
#else
#if defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023026 && defined(_HAS_EXCEPTIONS) && _HAS_EXCEPTIONS
#define HAS_NOEXCEPT
#endif
#endif
#endif
#ifdef HAS_NOEXCEPT
#define NOEXCEPT noexcept
#else
#define NOEXCEPT
#endif
#endif
#endif // VK_SDK_PLATFORM_H
@@ -0,0 +1,91 @@
#ifndef VULKAN_H_
#define VULKAN_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
#include "vk_platform.h"
#include "vulkan_core.h"
#ifdef VK_USE_PLATFORM_ANDROID_KHR
#include "vulkan_android.h"
#endif
#ifdef VK_USE_PLATFORM_FUCHSIA
#include <zircon/types.h>
#include "vulkan_fuchsia.h"
#endif
#ifdef VK_USE_PLATFORM_IOS_MVK
#include "vulkan_ios.h"
#endif
#ifdef VK_USE_PLATFORM_MACOS_MVK
#include "vulkan_macos.h"
#endif
#ifdef VK_USE_PLATFORM_METAL_EXT
#include "vulkan_metal.h"
#endif
#ifdef VK_USE_PLATFORM_VI_NN
#include "vulkan_vi.h"
#endif
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
#include "vulkan_wayland.h"
#endif
#ifdef VK_USE_PLATFORM_WIN32_KHR
#include <windows.h>
#include "vulkan_win32.h"
#endif
#ifdef VK_USE_PLATFORM_XCB_KHR
#include <xcb/xcb.h>
#include "vulkan_xcb.h"
#endif
#ifdef VK_USE_PLATFORM_XLIB_KHR
#include <X11/Xlib.h>
#include "vulkan_xlib.h"
#endif
#ifdef VK_USE_PLATFORM_DIRECTFB_EXT
#include <directfb.h>
#include "vulkan_directfb.h"
#endif
#ifdef VK_USE_PLATFORM_XLIB_XRANDR_EXT
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include "vulkan_xlib_xrandr.h"
#endif
#ifdef VK_USE_PLATFORM_GGP
#include <ggp_c/vulkan_types.h>
#include "vulkan_ggp.h"
#endif
#ifdef VK_USE_PLATFORM_SCREEN_QNX
#include <screen/screen.h>
#include "vulkan_screen.h"
#endif
#ifdef VK_ENABLE_BETA_EXTENSIONS
#include "vulkan_beta.h"
#endif
#endif // VULKAN_H_
File diff suppressed because it is too large

@@ -0,0 +1,125 @@
#ifndef VULKAN_ANDROID_H_
#define VULKAN_ANDROID_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_KHR_android_surface 1
struct ANativeWindow;
#define VK_KHR_ANDROID_SURFACE_SPEC_VERSION 6
#define VK_KHR_ANDROID_SURFACE_EXTENSION_NAME "VK_KHR_android_surface"
typedef VkFlags VkAndroidSurfaceCreateFlagsKHR;
typedef struct VkAndroidSurfaceCreateInfoKHR {
VkStructureType sType;
const void* pNext;
VkAndroidSurfaceCreateFlagsKHR flags;
struct ANativeWindow* window;
} VkAndroidSurfaceCreateInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkCreateAndroidSurfaceKHR)(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(
VkInstance instance,
const VkAndroidSurfaceCreateInfoKHR* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#define VK_ANDROID_external_memory_android_hardware_buffer 1
struct AHardwareBuffer;
#define VK_ANDROID_EXTERNAL_MEMORY_ANDROID_HARDWARE_BUFFER_SPEC_VERSION 5
#define VK_ANDROID_EXTERNAL_MEMORY_ANDROID_HARDWARE_BUFFER_EXTENSION_NAME "VK_ANDROID_external_memory_android_hardware_buffer"
typedef struct VkAndroidHardwareBufferUsageANDROID {
VkStructureType sType;
void* pNext;
uint64_t androidHardwareBufferUsage;
} VkAndroidHardwareBufferUsageANDROID;
typedef struct VkAndroidHardwareBufferPropertiesANDROID {
VkStructureType sType;
void* pNext;
VkDeviceSize allocationSize;
uint32_t memoryTypeBits;
} VkAndroidHardwareBufferPropertiesANDROID;
typedef struct VkAndroidHardwareBufferFormatPropertiesANDROID {
VkStructureType sType;
void* pNext;
VkFormat format;
uint64_t externalFormat;
VkFormatFeatureFlags formatFeatures;
VkComponentMapping samplerYcbcrConversionComponents;
VkSamplerYcbcrModelConversion suggestedYcbcrModel;
VkSamplerYcbcrRange suggestedYcbcrRange;
VkChromaLocation suggestedXChromaOffset;
VkChromaLocation suggestedYChromaOffset;
} VkAndroidHardwareBufferFormatPropertiesANDROID;
typedef struct VkImportAndroidHardwareBufferInfoANDROID {
VkStructureType sType;
const void* pNext;
struct AHardwareBuffer* buffer;
} VkImportAndroidHardwareBufferInfoANDROID;
typedef struct VkMemoryGetAndroidHardwareBufferInfoANDROID {
VkStructureType sType;
const void* pNext;
VkDeviceMemory memory;
} VkMemoryGetAndroidHardwareBufferInfoANDROID;
typedef struct VkExternalFormatANDROID {
VkStructureType sType;
void* pNext;
uint64_t externalFormat;
} VkExternalFormatANDROID;
typedef struct VkAndroidHardwareBufferFormatProperties2ANDROID {
VkStructureType sType;
void* pNext;
VkFormat format;
uint64_t externalFormat;
VkFormatFeatureFlags2 formatFeatures;
VkComponentMapping samplerYcbcrConversionComponents;
VkSamplerYcbcrModelConversion suggestedYcbcrModel;
VkSamplerYcbcrRange suggestedYcbcrRange;
VkChromaLocation suggestedXChromaOffset;
VkChromaLocation suggestedYChromaOffset;
} VkAndroidHardwareBufferFormatProperties2ANDROID;
typedef VkResult (VKAPI_PTR *PFN_vkGetAndroidHardwareBufferPropertiesANDROID)(VkDevice device, const struct AHardwareBuffer* buffer, VkAndroidHardwareBufferPropertiesANDROID* pProperties);
typedef VkResult (VKAPI_PTR *PFN_vkGetMemoryAndroidHardwareBufferANDROID)(VkDevice device, const VkMemoryGetAndroidHardwareBufferInfoANDROID* pInfo, struct AHardwareBuffer** pBuffer);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkGetAndroidHardwareBufferPropertiesANDROID(
VkDevice device,
const struct AHardwareBuffer* buffer,
VkAndroidHardwareBufferPropertiesANDROID* pProperties);
VKAPI_ATTR VkResult VKAPI_CALL vkGetMemoryAndroidHardwareBufferANDROID(
VkDevice device,
const VkMemoryGetAndroidHardwareBufferInfoANDROID* pInfo,
struct AHardwareBuffer** pBuffer);
#endif
#ifdef __cplusplus
}
#endif
#endif
@@ -0,0 +1,553 @@
#ifndef VULKAN_BETA_H_
#define VULKAN_BETA_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_KHR_portability_subset 1
#define VK_KHR_PORTABILITY_SUBSET_SPEC_VERSION 1
#define VK_KHR_PORTABILITY_SUBSET_EXTENSION_NAME "VK_KHR_portability_subset"
typedef struct VkPhysicalDevicePortabilitySubsetFeaturesKHR {
VkStructureType sType;
void* pNext;
VkBool32 constantAlphaColorBlendFactors;
VkBool32 events;
VkBool32 imageViewFormatReinterpretation;
VkBool32 imageViewFormatSwizzle;
VkBool32 imageView2DOn3DImage;
VkBool32 multisampleArrayImage;
VkBool32 mutableComparisonSamplers;
VkBool32 pointPolygons;
VkBool32 samplerMipLodBias;
VkBool32 separateStencilMaskRef;
VkBool32 shaderSampleRateInterpolationFunctions;
VkBool32 tessellationIsolines;
VkBool32 tessellationPointMode;
VkBool32 triangleFans;
VkBool32 vertexAttributeAccessBeyondStride;
} VkPhysicalDevicePortabilitySubsetFeaturesKHR;
typedef struct VkPhysicalDevicePortabilitySubsetPropertiesKHR {
VkStructureType sType;
void* pNext;
uint32_t minVertexInputBindingStrideAlignment;
} VkPhysicalDevicePortabilitySubsetPropertiesKHR;
#define VK_KHR_video_encode_queue 1
#define VK_KHR_VIDEO_ENCODE_QUEUE_SPEC_VERSION 7
#define VK_KHR_VIDEO_ENCODE_QUEUE_EXTENSION_NAME "VK_KHR_video_encode_queue"
typedef enum VkVideoEncodeTuningModeKHR {
VK_VIDEO_ENCODE_TUNING_MODE_DEFAULT_KHR = 0,
VK_VIDEO_ENCODE_TUNING_MODE_HIGH_QUALITY_KHR = 1,
VK_VIDEO_ENCODE_TUNING_MODE_LOW_LATENCY_KHR = 2,
VK_VIDEO_ENCODE_TUNING_MODE_ULTRA_LOW_LATENCY_KHR = 3,
VK_VIDEO_ENCODE_TUNING_MODE_LOSSLESS_KHR = 4,
VK_VIDEO_ENCODE_TUNING_MODE_MAX_ENUM_KHR = 0x7FFFFFFF
} VkVideoEncodeTuningModeKHR;
typedef VkFlags VkVideoEncodeFlagsKHR;
typedef enum VkVideoEncodeCapabilityFlagBitsKHR {
VK_VIDEO_ENCODE_CAPABILITY_PRECEDING_EXTERNALLY_ENCODED_BYTES_BIT_KHR = 0x00000001,
VK_VIDEO_ENCODE_CAPABILITY_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkVideoEncodeCapabilityFlagBitsKHR;
typedef VkFlags VkVideoEncodeCapabilityFlagsKHR;
typedef enum VkVideoEncodeRateControlModeFlagBitsKHR {
VK_VIDEO_ENCODE_RATE_CONTROL_MODE_NONE_BIT_KHR = 0,
VK_VIDEO_ENCODE_RATE_CONTROL_MODE_CBR_BIT_KHR = 1,
VK_VIDEO_ENCODE_RATE_CONTROL_MODE_VBR_BIT_KHR = 2,
VK_VIDEO_ENCODE_RATE_CONTROL_MODE_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkVideoEncodeRateControlModeFlagBitsKHR;
typedef VkFlags VkVideoEncodeRateControlModeFlagsKHR;
typedef enum VkVideoEncodeUsageFlagBitsKHR {
VK_VIDEO_ENCODE_USAGE_DEFAULT_KHR = 0,
VK_VIDEO_ENCODE_USAGE_TRANSCODING_BIT_KHR = 0x00000001,
VK_VIDEO_ENCODE_USAGE_STREAMING_BIT_KHR = 0x00000002,
VK_VIDEO_ENCODE_USAGE_RECORDING_BIT_KHR = 0x00000004,
VK_VIDEO_ENCODE_USAGE_CONFERENCING_BIT_KHR = 0x00000008,
VK_VIDEO_ENCODE_USAGE_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkVideoEncodeUsageFlagBitsKHR;
typedef VkFlags VkVideoEncodeUsageFlagsKHR;
typedef enum VkVideoEncodeContentFlagBitsKHR {
VK_VIDEO_ENCODE_CONTENT_DEFAULT_KHR = 0,
VK_VIDEO_ENCODE_CONTENT_CAMERA_BIT_KHR = 0x00000001,
VK_VIDEO_ENCODE_CONTENT_DESKTOP_BIT_KHR = 0x00000002,
VK_VIDEO_ENCODE_CONTENT_RENDERED_BIT_KHR = 0x00000004,
VK_VIDEO_ENCODE_CONTENT_FLAG_BITS_MAX_ENUM_KHR = 0x7FFFFFFF
} VkVideoEncodeContentFlagBitsKHR;
typedef VkFlags VkVideoEncodeContentFlagsKHR;
typedef VkFlags VkVideoEncodeRateControlFlagsKHR;
typedef struct VkVideoEncodeInfoKHR {
VkStructureType sType;
const void* pNext;
VkVideoEncodeFlagsKHR flags;
uint32_t qualityLevel;
VkBuffer dstBitstreamBuffer;
VkDeviceSize dstBitstreamBufferOffset;
VkDeviceSize dstBitstreamBufferMaxRange;
VkVideoPictureResourceInfoKHR srcPictureResource;
const VkVideoReferenceSlotInfoKHR* pSetupReferenceSlot;
uint32_t referenceSlotCount;
const VkVideoReferenceSlotInfoKHR* pReferenceSlots;
uint32_t precedingExternallyEncodedBytes;
} VkVideoEncodeInfoKHR;
typedef struct VkVideoEncodeCapabilitiesKHR {
VkStructureType sType;
void* pNext;
VkVideoEncodeCapabilityFlagsKHR flags;
VkVideoEncodeRateControlModeFlagsKHR rateControlModes;
uint8_t rateControlLayerCount;
uint8_t qualityLevelCount;
VkExtent2D inputImageDataFillAlignment;
} VkVideoEncodeCapabilitiesKHR;
typedef struct VkVideoEncodeUsageInfoKHR {
VkStructureType sType;
const void* pNext;
VkVideoEncodeUsageFlagsKHR videoUsageHints;
VkVideoEncodeContentFlagsKHR videoContentHints;
VkVideoEncodeTuningModeKHR tuningMode;
} VkVideoEncodeUsageInfoKHR;
typedef struct VkVideoEncodeRateControlLayerInfoKHR {
VkStructureType sType;
const void* pNext;
uint32_t averageBitrate;
uint32_t maxBitrate;
uint32_t frameRateNumerator;
uint32_t frameRateDenominator;
uint32_t virtualBufferSizeInMs;
uint32_t initialVirtualBufferSizeInMs;
} VkVideoEncodeRateControlLayerInfoKHR;
typedef struct VkVideoEncodeRateControlInfoKHR {
VkStructureType sType;
const void* pNext;
VkVideoEncodeRateControlFlagsKHR flags;
VkVideoEncodeRateControlModeFlagBitsKHR rateControlMode;
uint8_t layerCount;
const VkVideoEncodeRateControlLayerInfoKHR* pLayerConfigs;
} VkVideoEncodeRateControlInfoKHR;
typedef void (VKAPI_PTR *PFN_vkCmdEncodeVideoKHR)(VkCommandBuffer commandBuffer, const VkVideoEncodeInfoKHR* pEncodeInfo);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR void VKAPI_CALL vkCmdEncodeVideoKHR(
VkCommandBuffer commandBuffer,
const VkVideoEncodeInfoKHR* pEncodeInfo);
#endif
#define VK_EXT_video_encode_h264 1
#include "vk_video/vulkan_video_codec_h264std.h"
#include "vk_video/vulkan_video_codec_h264std_encode.h"
#define VK_EXT_VIDEO_ENCODE_H264_SPEC_VERSION 9
#define VK_EXT_VIDEO_ENCODE_H264_EXTENSION_NAME "VK_EXT_video_encode_h264"
typedef enum VkVideoEncodeH264RateControlStructureEXT {
VK_VIDEO_ENCODE_H264_RATE_CONTROL_STRUCTURE_UNKNOWN_EXT = 0,
VK_VIDEO_ENCODE_H264_RATE_CONTROL_STRUCTURE_FLAT_EXT = 1,
VK_VIDEO_ENCODE_H264_RATE_CONTROL_STRUCTURE_DYADIC_EXT = 2,
VK_VIDEO_ENCODE_H264_RATE_CONTROL_STRUCTURE_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH264RateControlStructureEXT;
typedef enum VkVideoEncodeH264CapabilityFlagBitsEXT {
VK_VIDEO_ENCODE_H264_CAPABILITY_DIRECT_8X8_INFERENCE_ENABLED_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H264_CAPABILITY_DIRECT_8X8_INFERENCE_DISABLED_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H264_CAPABILITY_SEPARATE_COLOUR_PLANE_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H264_CAPABILITY_QPPRIME_Y_ZERO_TRANSFORM_BYPASS_BIT_EXT = 0x00000008,
VK_VIDEO_ENCODE_H264_CAPABILITY_SCALING_LISTS_BIT_EXT = 0x00000010,
VK_VIDEO_ENCODE_H264_CAPABILITY_HRD_COMPLIANCE_BIT_EXT = 0x00000020,
VK_VIDEO_ENCODE_H264_CAPABILITY_CHROMA_QP_OFFSET_BIT_EXT = 0x00000040,
VK_VIDEO_ENCODE_H264_CAPABILITY_SECOND_CHROMA_QP_OFFSET_BIT_EXT = 0x00000080,
VK_VIDEO_ENCODE_H264_CAPABILITY_PIC_INIT_QP_MINUS26_BIT_EXT = 0x00000100,
VK_VIDEO_ENCODE_H264_CAPABILITY_WEIGHTED_PRED_BIT_EXT = 0x00000200,
VK_VIDEO_ENCODE_H264_CAPABILITY_WEIGHTED_BIPRED_EXPLICIT_BIT_EXT = 0x00000400,
VK_VIDEO_ENCODE_H264_CAPABILITY_WEIGHTED_BIPRED_IMPLICIT_BIT_EXT = 0x00000800,
VK_VIDEO_ENCODE_H264_CAPABILITY_WEIGHTED_PRED_NO_TABLE_BIT_EXT = 0x00001000,
VK_VIDEO_ENCODE_H264_CAPABILITY_TRANSFORM_8X8_BIT_EXT = 0x00002000,
VK_VIDEO_ENCODE_H264_CAPABILITY_CABAC_BIT_EXT = 0x00004000,
VK_VIDEO_ENCODE_H264_CAPABILITY_CAVLC_BIT_EXT = 0x00008000,
VK_VIDEO_ENCODE_H264_CAPABILITY_DEBLOCKING_FILTER_DISABLED_BIT_EXT = 0x00010000,
VK_VIDEO_ENCODE_H264_CAPABILITY_DEBLOCKING_FILTER_ENABLED_BIT_EXT = 0x00020000,
VK_VIDEO_ENCODE_H264_CAPABILITY_DEBLOCKING_FILTER_PARTIAL_BIT_EXT = 0x00040000,
VK_VIDEO_ENCODE_H264_CAPABILITY_DISABLE_DIRECT_SPATIAL_MV_PRED_BIT_EXT = 0x00080000,
VK_VIDEO_ENCODE_H264_CAPABILITY_MULTIPLE_SLICE_PER_FRAME_BIT_EXT = 0x00100000,
VK_VIDEO_ENCODE_H264_CAPABILITY_SLICE_MB_COUNT_BIT_EXT = 0x00200000,
VK_VIDEO_ENCODE_H264_CAPABILITY_ROW_UNALIGNED_SLICE_BIT_EXT = 0x00400000,
VK_VIDEO_ENCODE_H264_CAPABILITY_DIFFERENT_SLICE_TYPE_BIT_EXT = 0x00800000,
VK_VIDEO_ENCODE_H264_CAPABILITY_B_FRAME_IN_L1_LIST_BIT_EXT = 0x01000000,
VK_VIDEO_ENCODE_H264_CAPABILITY_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH264CapabilityFlagBitsEXT;
typedef VkFlags VkVideoEncodeH264CapabilityFlagsEXT;
typedef enum VkVideoEncodeH264InputModeFlagBitsEXT {
VK_VIDEO_ENCODE_H264_INPUT_MODE_FRAME_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H264_INPUT_MODE_SLICE_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H264_INPUT_MODE_NON_VCL_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H264_INPUT_MODE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH264InputModeFlagBitsEXT;
typedef VkFlags VkVideoEncodeH264InputModeFlagsEXT;
typedef enum VkVideoEncodeH264OutputModeFlagBitsEXT {
VK_VIDEO_ENCODE_H264_OUTPUT_MODE_FRAME_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H264_OUTPUT_MODE_SLICE_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H264_OUTPUT_MODE_NON_VCL_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H264_OUTPUT_MODE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH264OutputModeFlagBitsEXT;
typedef VkFlags VkVideoEncodeH264OutputModeFlagsEXT;
typedef struct VkVideoEncodeH264CapabilitiesEXT {
VkStructureType sType;
void* pNext;
VkVideoEncodeH264CapabilityFlagsEXT flags;
VkVideoEncodeH264InputModeFlagsEXT inputModeFlags;
VkVideoEncodeH264OutputModeFlagsEXT outputModeFlags;
uint8_t maxPPictureL0ReferenceCount;
uint8_t maxBPictureL0ReferenceCount;
uint8_t maxL1ReferenceCount;
VkBool32 motionVectorsOverPicBoundariesFlag;
uint32_t maxBytesPerPicDenom;
uint32_t maxBitsPerMbDenom;
uint32_t log2MaxMvLengthHorizontal;
uint32_t log2MaxMvLengthVertical;
} VkVideoEncodeH264CapabilitiesEXT;
typedef struct VkVideoEncodeH264SessionParametersAddInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t stdSPSCount;
const StdVideoH264SequenceParameterSet* pStdSPSs;
uint32_t stdPPSCount;
const StdVideoH264PictureParameterSet* pStdPPSs;
} VkVideoEncodeH264SessionParametersAddInfoEXT;
typedef struct VkVideoEncodeH264SessionParametersCreateInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t maxStdSPSCount;
uint32_t maxStdPPSCount;
const VkVideoEncodeH264SessionParametersAddInfoEXT* pParametersAddInfo;
} VkVideoEncodeH264SessionParametersCreateInfoEXT;
typedef struct VkVideoEncodeH264DpbSlotInfoEXT {
VkStructureType sType;
const void* pNext;
int8_t slotIndex;
const StdVideoEncodeH264ReferenceInfo* pStdReferenceInfo;
} VkVideoEncodeH264DpbSlotInfoEXT;
typedef struct VkVideoEncodeH264ReferenceListsInfoEXT {
VkStructureType sType;
const void* pNext;
uint8_t referenceList0EntryCount;
const VkVideoEncodeH264DpbSlotInfoEXT* pReferenceList0Entries;
uint8_t referenceList1EntryCount;
const VkVideoEncodeH264DpbSlotInfoEXT* pReferenceList1Entries;
const StdVideoEncodeH264RefMemMgmtCtrlOperations* pMemMgmtCtrlOperations;
} VkVideoEncodeH264ReferenceListsInfoEXT;
typedef struct VkVideoEncodeH264NaluSliceInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t mbCount;
const VkVideoEncodeH264ReferenceListsInfoEXT* pReferenceFinalLists;
const StdVideoEncodeH264SliceHeader* pSliceHeaderStd;
} VkVideoEncodeH264NaluSliceInfoEXT;
typedef struct VkVideoEncodeH264VclFrameInfoEXT {
VkStructureType sType;
const void* pNext;
const VkVideoEncodeH264ReferenceListsInfoEXT* pReferenceFinalLists;
uint32_t naluSliceEntryCount;
const VkVideoEncodeH264NaluSliceInfoEXT* pNaluSliceEntries;
const StdVideoEncodeH264PictureInfo* pCurrentPictureInfo;
} VkVideoEncodeH264VclFrameInfoEXT;
typedef struct VkVideoEncodeH264EmitPictureParametersInfoEXT {
VkStructureType sType;
const void* pNext;
uint8_t spsId;
VkBool32 emitSpsEnable;
uint32_t ppsIdEntryCount;
const uint8_t* ppsIdEntries;
} VkVideoEncodeH264EmitPictureParametersInfoEXT;
typedef struct VkVideoEncodeH264ProfileInfoEXT {
VkStructureType sType;
const void* pNext;
StdVideoH264ProfileIdc stdProfileIdc;
} VkVideoEncodeH264ProfileInfoEXT;
typedef struct VkVideoEncodeH264RateControlInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t gopFrameCount;
uint32_t idrPeriod;
uint32_t consecutiveBFrameCount;
VkVideoEncodeH264RateControlStructureEXT rateControlStructure;
uint8_t temporalLayerCount;
} VkVideoEncodeH264RateControlInfoEXT;
typedef struct VkVideoEncodeH264QpEXT {
int32_t qpI;
int32_t qpP;
int32_t qpB;
} VkVideoEncodeH264QpEXT;
typedef struct VkVideoEncodeH264FrameSizeEXT {
uint32_t frameISize;
uint32_t framePSize;
uint32_t frameBSize;
} VkVideoEncodeH264FrameSizeEXT;
typedef struct VkVideoEncodeH264RateControlLayerInfoEXT {
VkStructureType sType;
const void* pNext;
uint8_t temporalLayerId;
VkBool32 useInitialRcQp;
VkVideoEncodeH264QpEXT initialRcQp;
VkBool32 useMinQp;
VkVideoEncodeH264QpEXT minQp;
VkBool32 useMaxQp;
VkVideoEncodeH264QpEXT maxQp;
VkBool32 useMaxFrameSize;
VkVideoEncodeH264FrameSizeEXT maxFrameSize;
} VkVideoEncodeH264RateControlLayerInfoEXT;
#define VK_EXT_video_encode_h265 1
#include "vk_video/vulkan_video_codec_h265std.h"
#include "vk_video/vulkan_video_codec_h265std_encode.h"
#define VK_EXT_VIDEO_ENCODE_H265_SPEC_VERSION 9
#define VK_EXT_VIDEO_ENCODE_H265_EXTENSION_NAME "VK_EXT_video_encode_h265"
typedef enum VkVideoEncodeH265RateControlStructureEXT {
VK_VIDEO_ENCODE_H265_RATE_CONTROL_STRUCTURE_UNKNOWN_EXT = 0,
VK_VIDEO_ENCODE_H265_RATE_CONTROL_STRUCTURE_FLAT_EXT = 1,
VK_VIDEO_ENCODE_H265_RATE_CONTROL_STRUCTURE_DYADIC_EXT = 2,
VK_VIDEO_ENCODE_H265_RATE_CONTROL_STRUCTURE_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH265RateControlStructureEXT;
typedef enum VkVideoEncodeH265CapabilityFlagBitsEXT {
VK_VIDEO_ENCODE_H265_CAPABILITY_SEPARATE_COLOUR_PLANE_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H265_CAPABILITY_SCALING_LISTS_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H265_CAPABILITY_SAMPLE_ADAPTIVE_OFFSET_ENABLED_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H265_CAPABILITY_PCM_ENABLE_BIT_EXT = 0x00000008,
VK_VIDEO_ENCODE_H265_CAPABILITY_SPS_TEMPORAL_MVP_ENABLED_BIT_EXT = 0x00000010,
VK_VIDEO_ENCODE_H265_CAPABILITY_HRD_COMPLIANCE_BIT_EXT = 0x00000020,
VK_VIDEO_ENCODE_H265_CAPABILITY_INIT_QP_MINUS26_BIT_EXT = 0x00000040,
VK_VIDEO_ENCODE_H265_CAPABILITY_LOG2_PARALLEL_MERGE_LEVEL_MINUS2_BIT_EXT = 0x00000080,
VK_VIDEO_ENCODE_H265_CAPABILITY_SIGN_DATA_HIDING_ENABLED_BIT_EXT = 0x00000100,
VK_VIDEO_ENCODE_H265_CAPABILITY_TRANSFORM_SKIP_ENABLED_BIT_EXT = 0x00000200,
VK_VIDEO_ENCODE_H265_CAPABILITY_TRANSFORM_SKIP_DISABLED_BIT_EXT = 0x00000400,
VK_VIDEO_ENCODE_H265_CAPABILITY_PPS_SLICE_CHROMA_QP_OFFSETS_PRESENT_BIT_EXT = 0x00000800,
VK_VIDEO_ENCODE_H265_CAPABILITY_WEIGHTED_PRED_BIT_EXT = 0x00001000,
VK_VIDEO_ENCODE_H265_CAPABILITY_WEIGHTED_BIPRED_BIT_EXT = 0x00002000,
VK_VIDEO_ENCODE_H265_CAPABILITY_WEIGHTED_PRED_NO_TABLE_BIT_EXT = 0x00004000,
VK_VIDEO_ENCODE_H265_CAPABILITY_TRANSQUANT_BYPASS_ENABLED_BIT_EXT = 0x00008000,
VK_VIDEO_ENCODE_H265_CAPABILITY_ENTROPY_CODING_SYNC_ENABLED_BIT_EXT = 0x00010000,
VK_VIDEO_ENCODE_H265_CAPABILITY_DEBLOCKING_FILTER_OVERRIDE_ENABLED_BIT_EXT = 0x00020000,
VK_VIDEO_ENCODE_H265_CAPABILITY_MULTIPLE_TILE_PER_FRAME_BIT_EXT = 0x00040000,
VK_VIDEO_ENCODE_H265_CAPABILITY_MULTIPLE_SLICE_PER_TILE_BIT_EXT = 0x00080000,
VK_VIDEO_ENCODE_H265_CAPABILITY_MULTIPLE_TILE_PER_SLICE_BIT_EXT = 0x00100000,
VK_VIDEO_ENCODE_H265_CAPABILITY_SLICE_SEGMENT_CTB_COUNT_BIT_EXT = 0x00200000,
VK_VIDEO_ENCODE_H265_CAPABILITY_ROW_UNALIGNED_SLICE_SEGMENT_BIT_EXT = 0x00400000,
VK_VIDEO_ENCODE_H265_CAPABILITY_DEPENDENT_SLICE_SEGMENT_BIT_EXT = 0x00800000,
VK_VIDEO_ENCODE_H265_CAPABILITY_DIFFERENT_SLICE_TYPE_BIT_EXT = 0x01000000,
VK_VIDEO_ENCODE_H265_CAPABILITY_B_FRAME_IN_L1_LIST_BIT_EXT = 0x02000000,
VK_VIDEO_ENCODE_H265_CAPABILITY_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH265CapabilityFlagBitsEXT;
typedef VkFlags VkVideoEncodeH265CapabilityFlagsEXT;
typedef enum VkVideoEncodeH265InputModeFlagBitsEXT {
VK_VIDEO_ENCODE_H265_INPUT_MODE_FRAME_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H265_INPUT_MODE_SLICE_SEGMENT_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H265_INPUT_MODE_NON_VCL_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H265_INPUT_MODE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH265InputModeFlagBitsEXT;
typedef VkFlags VkVideoEncodeH265InputModeFlagsEXT;
typedef enum VkVideoEncodeH265OutputModeFlagBitsEXT {
VK_VIDEO_ENCODE_H265_OUTPUT_MODE_FRAME_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H265_OUTPUT_MODE_SLICE_SEGMENT_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H265_OUTPUT_MODE_NON_VCL_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H265_OUTPUT_MODE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH265OutputModeFlagBitsEXT;
typedef VkFlags VkVideoEncodeH265OutputModeFlagsEXT;
typedef enum VkVideoEncodeH265CtbSizeFlagBitsEXT {
VK_VIDEO_ENCODE_H265_CTB_SIZE_16_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H265_CTB_SIZE_32_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H265_CTB_SIZE_64_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H265_CTB_SIZE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH265CtbSizeFlagBitsEXT;
typedef VkFlags VkVideoEncodeH265CtbSizeFlagsEXT;
typedef enum VkVideoEncodeH265TransformBlockSizeFlagBitsEXT {
VK_VIDEO_ENCODE_H265_TRANSFORM_BLOCK_SIZE_4_BIT_EXT = 0x00000001,
VK_VIDEO_ENCODE_H265_TRANSFORM_BLOCK_SIZE_8_BIT_EXT = 0x00000002,
VK_VIDEO_ENCODE_H265_TRANSFORM_BLOCK_SIZE_16_BIT_EXT = 0x00000004,
VK_VIDEO_ENCODE_H265_TRANSFORM_BLOCK_SIZE_32_BIT_EXT = 0x00000008,
VK_VIDEO_ENCODE_H265_TRANSFORM_BLOCK_SIZE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkVideoEncodeH265TransformBlockSizeFlagBitsEXT;
typedef VkFlags VkVideoEncodeH265TransformBlockSizeFlagsEXT;
typedef struct VkVideoEncodeH265CapabilitiesEXT {
VkStructureType sType;
void* pNext;
VkVideoEncodeH265CapabilityFlagsEXT flags;
VkVideoEncodeH265InputModeFlagsEXT inputModeFlags;
VkVideoEncodeH265OutputModeFlagsEXT outputModeFlags;
VkVideoEncodeH265CtbSizeFlagsEXT ctbSizes;
VkVideoEncodeH265TransformBlockSizeFlagsEXT transformBlockSizes;
uint8_t maxPPictureL0ReferenceCount;
uint8_t maxBPictureL0ReferenceCount;
uint8_t maxL1ReferenceCount;
uint8_t maxSubLayersCount;
uint8_t minLog2MinLumaCodingBlockSizeMinus3;
uint8_t maxLog2MinLumaCodingBlockSizeMinus3;
uint8_t minLog2MinLumaTransformBlockSizeMinus2;
uint8_t maxLog2MinLumaTransformBlockSizeMinus2;
uint8_t minMaxTransformHierarchyDepthInter;
uint8_t maxMaxTransformHierarchyDepthInter;
uint8_t minMaxTransformHierarchyDepthIntra;
uint8_t maxMaxTransformHierarchyDepthIntra;
uint8_t maxDiffCuQpDeltaDepth;
uint8_t minMaxNumMergeCand;
uint8_t maxMaxNumMergeCand;
} VkVideoEncodeH265CapabilitiesEXT;
typedef struct VkVideoEncodeH265SessionParametersAddInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t stdVPSCount;
const StdVideoH265VideoParameterSet* pStdVPSs;
uint32_t stdSPSCount;
const StdVideoH265SequenceParameterSet* pStdSPSs;
uint32_t stdPPSCount;
const StdVideoH265PictureParameterSet* pStdPPSs;
} VkVideoEncodeH265SessionParametersAddInfoEXT;
typedef struct VkVideoEncodeH265SessionParametersCreateInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t maxStdVPSCount;
uint32_t maxStdSPSCount;
uint32_t maxStdPPSCount;
const VkVideoEncodeH265SessionParametersAddInfoEXT* pParametersAddInfo;
} VkVideoEncodeH265SessionParametersCreateInfoEXT;
typedef struct VkVideoEncodeH265DpbSlotInfoEXT {
VkStructureType sType;
const void* pNext;
int8_t slotIndex;
const StdVideoEncodeH265ReferenceInfo* pStdReferenceInfo;
} VkVideoEncodeH265DpbSlotInfoEXT;
typedef struct VkVideoEncodeH265ReferenceListsInfoEXT {
VkStructureType sType;
const void* pNext;
uint8_t referenceList0EntryCount;
const VkVideoEncodeH265DpbSlotInfoEXT* pReferenceList0Entries;
uint8_t referenceList1EntryCount;
const VkVideoEncodeH265DpbSlotInfoEXT* pReferenceList1Entries;
const StdVideoEncodeH265ReferenceModifications* pReferenceModifications;
} VkVideoEncodeH265ReferenceListsInfoEXT;
typedef struct VkVideoEncodeH265NaluSliceSegmentInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t ctbCount;
const VkVideoEncodeH265ReferenceListsInfoEXT* pReferenceFinalLists;
const StdVideoEncodeH265SliceSegmentHeader* pSliceSegmentHeaderStd;
} VkVideoEncodeH265NaluSliceSegmentInfoEXT;
typedef struct VkVideoEncodeH265VclFrameInfoEXT {
VkStructureType sType;
const void* pNext;
const VkVideoEncodeH265ReferenceListsInfoEXT* pReferenceFinalLists;
uint32_t naluSliceSegmentEntryCount;
const VkVideoEncodeH265NaluSliceSegmentInfoEXT* pNaluSliceSegmentEntries;
const StdVideoEncodeH265PictureInfo* pCurrentPictureInfo;
} VkVideoEncodeH265VclFrameInfoEXT;
typedef struct VkVideoEncodeH265EmitPictureParametersInfoEXT {
VkStructureType sType;
const void* pNext;
uint8_t vpsId;
uint8_t spsId;
VkBool32 emitVpsEnable;
VkBool32 emitSpsEnable;
uint32_t ppsIdEntryCount;
const uint8_t* ppsIdEntries;
} VkVideoEncodeH265EmitPictureParametersInfoEXT;
typedef struct VkVideoEncodeH265ProfileInfoEXT {
VkStructureType sType;
const void* pNext;
StdVideoH265ProfileIdc stdProfileIdc;
} VkVideoEncodeH265ProfileInfoEXT;
typedef struct VkVideoEncodeH265RateControlInfoEXT {
VkStructureType sType;
const void* pNext;
uint32_t gopFrameCount;
uint32_t idrPeriod;
uint32_t consecutiveBFrameCount;
VkVideoEncodeH265RateControlStructureEXT rateControlStructure;
uint8_t subLayerCount;
} VkVideoEncodeH265RateControlInfoEXT;
typedef struct VkVideoEncodeH265QpEXT {
int32_t qpI;
int32_t qpP;
int32_t qpB;
} VkVideoEncodeH265QpEXT;
typedef struct VkVideoEncodeH265FrameSizeEXT {
uint32_t frameISize;
uint32_t framePSize;
uint32_t frameBSize;
} VkVideoEncodeH265FrameSizeEXT;
typedef struct VkVideoEncodeH265RateControlLayerInfoEXT {
VkStructureType sType;
const void* pNext;
uint8_t temporalId;
VkBool32 useInitialRcQp;
VkVideoEncodeH265QpEXT initialRcQp;
VkBool32 useMinQp;
VkVideoEncodeH265QpEXT minQp;
VkBool32 useMaxQp;
VkVideoEncodeH265QpEXT maxQp;
VkBool32 useMaxFrameSize;
VkVideoEncodeH265FrameSizeEXT maxFrameSize;
} VkVideoEncodeH265RateControlLayerInfoEXT;
#ifdef __cplusplus
}
#endif
#endif

File diff suppressed because it is too large

@@ -0,0 +1,54 @@
#ifndef VULKAN_DIRECTFB_H_
#define VULKAN_DIRECTFB_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_EXT_directfb_surface 1
#define VK_EXT_DIRECTFB_SURFACE_SPEC_VERSION 1
#define VK_EXT_DIRECTFB_SURFACE_EXTENSION_NAME "VK_EXT_directfb_surface"
typedef VkFlags VkDirectFBSurfaceCreateFlagsEXT;
typedef struct VkDirectFBSurfaceCreateInfoEXT {
VkStructureType sType;
const void* pNext;
VkDirectFBSurfaceCreateFlagsEXT flags;
IDirectFB* dfb;
IDirectFBSurface* surface;
} VkDirectFBSurfaceCreateInfoEXT;
typedef VkResult (VKAPI_PTR *PFN_vkCreateDirectFBSurfaceEXT)(VkInstance instance, const VkDirectFBSurfaceCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceDirectFBPresentationSupportEXT)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, IDirectFB* dfb);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateDirectFBSurfaceEXT(
VkInstance instance,
const VkDirectFBSurfaceCreateInfoEXT* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceDirectFBPresentationSupportEXT(
VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
IDirectFB* dfb);
#endif
#ifdef __cplusplus
}
#endif
#endif

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,258 @@
#ifndef VULKAN_FUCHSIA_H_
#define VULKAN_FUCHSIA_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_FUCHSIA_imagepipe_surface 1
#define VK_FUCHSIA_IMAGEPIPE_SURFACE_SPEC_VERSION 1
#define VK_FUCHSIA_IMAGEPIPE_SURFACE_EXTENSION_NAME "VK_FUCHSIA_imagepipe_surface"
typedef VkFlags VkImagePipeSurfaceCreateFlagsFUCHSIA;
typedef struct VkImagePipeSurfaceCreateInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkImagePipeSurfaceCreateFlagsFUCHSIA flags;
zx_handle_t imagePipeHandle;
} VkImagePipeSurfaceCreateInfoFUCHSIA;
typedef VkResult (VKAPI_PTR *PFN_vkCreateImagePipeSurfaceFUCHSIA)(VkInstance instance, const VkImagePipeSurfaceCreateInfoFUCHSIA* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateImagePipeSurfaceFUCHSIA(
VkInstance instance,
const VkImagePipeSurfaceCreateInfoFUCHSIA* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#define VK_FUCHSIA_external_memory 1
#define VK_FUCHSIA_EXTERNAL_MEMORY_SPEC_VERSION 1
#define VK_FUCHSIA_EXTERNAL_MEMORY_EXTENSION_NAME "VK_FUCHSIA_external_memory"
typedef struct VkImportMemoryZirconHandleInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkExternalMemoryHandleTypeFlagBits handleType;
zx_handle_t handle;
} VkImportMemoryZirconHandleInfoFUCHSIA;
typedef struct VkMemoryZirconHandlePropertiesFUCHSIA {
VkStructureType sType;
void* pNext;
uint32_t memoryTypeBits;
} VkMemoryZirconHandlePropertiesFUCHSIA;
typedef struct VkMemoryGetZirconHandleInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkDeviceMemory memory;
VkExternalMemoryHandleTypeFlagBits handleType;
} VkMemoryGetZirconHandleInfoFUCHSIA;
typedef VkResult (VKAPI_PTR *PFN_vkGetMemoryZirconHandleFUCHSIA)(VkDevice device, const VkMemoryGetZirconHandleInfoFUCHSIA* pGetZirconHandleInfo, zx_handle_t* pZirconHandle);
typedef VkResult (VKAPI_PTR *PFN_vkGetMemoryZirconHandlePropertiesFUCHSIA)(VkDevice device, VkExternalMemoryHandleTypeFlagBits handleType, zx_handle_t zirconHandle, VkMemoryZirconHandlePropertiesFUCHSIA* pMemoryZirconHandleProperties);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkGetMemoryZirconHandleFUCHSIA(
VkDevice device,
const VkMemoryGetZirconHandleInfoFUCHSIA* pGetZirconHandleInfo,
zx_handle_t* pZirconHandle);
VKAPI_ATTR VkResult VKAPI_CALL vkGetMemoryZirconHandlePropertiesFUCHSIA(
VkDevice device,
VkExternalMemoryHandleTypeFlagBits handleType,
zx_handle_t zirconHandle,
VkMemoryZirconHandlePropertiesFUCHSIA* pMemoryZirconHandleProperties);
#endif
#define VK_FUCHSIA_external_semaphore 1
#define VK_FUCHSIA_EXTERNAL_SEMAPHORE_SPEC_VERSION 1
#define VK_FUCHSIA_EXTERNAL_SEMAPHORE_EXTENSION_NAME "VK_FUCHSIA_external_semaphore"
typedef struct VkImportSemaphoreZirconHandleInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkSemaphore semaphore;
VkSemaphoreImportFlags flags;
VkExternalSemaphoreHandleTypeFlagBits handleType;
zx_handle_t zirconHandle;
} VkImportSemaphoreZirconHandleInfoFUCHSIA;
typedef struct VkSemaphoreGetZirconHandleInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkSemaphore semaphore;
VkExternalSemaphoreHandleTypeFlagBits handleType;
} VkSemaphoreGetZirconHandleInfoFUCHSIA;
typedef VkResult (VKAPI_PTR *PFN_vkImportSemaphoreZirconHandleFUCHSIA)(VkDevice device, const VkImportSemaphoreZirconHandleInfoFUCHSIA* pImportSemaphoreZirconHandleInfo);
typedef VkResult (VKAPI_PTR *PFN_vkGetSemaphoreZirconHandleFUCHSIA)(VkDevice device, const VkSemaphoreGetZirconHandleInfoFUCHSIA* pGetZirconHandleInfo, zx_handle_t* pZirconHandle);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkImportSemaphoreZirconHandleFUCHSIA(
VkDevice device,
const VkImportSemaphoreZirconHandleInfoFUCHSIA* pImportSemaphoreZirconHandleInfo);
VKAPI_ATTR VkResult VKAPI_CALL vkGetSemaphoreZirconHandleFUCHSIA(
VkDevice device,
const VkSemaphoreGetZirconHandleInfoFUCHSIA* pGetZirconHandleInfo,
zx_handle_t* pZirconHandle);
#endif
#define VK_FUCHSIA_buffer_collection 1
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkBufferCollectionFUCHSIA)
#define VK_FUCHSIA_BUFFER_COLLECTION_SPEC_VERSION 2
#define VK_FUCHSIA_BUFFER_COLLECTION_EXTENSION_NAME "VK_FUCHSIA_buffer_collection"
typedef VkFlags VkImageFormatConstraintsFlagsFUCHSIA;
typedef enum VkImageConstraintsInfoFlagBitsFUCHSIA {
VK_IMAGE_CONSTRAINTS_INFO_CPU_READ_RARELY_FUCHSIA = 0x00000001,
VK_IMAGE_CONSTRAINTS_INFO_CPU_READ_OFTEN_FUCHSIA = 0x00000002,
VK_IMAGE_CONSTRAINTS_INFO_CPU_WRITE_RARELY_FUCHSIA = 0x00000004,
VK_IMAGE_CONSTRAINTS_INFO_CPU_WRITE_OFTEN_FUCHSIA = 0x00000008,
VK_IMAGE_CONSTRAINTS_INFO_PROTECTED_OPTIONAL_FUCHSIA = 0x00000010,
VK_IMAGE_CONSTRAINTS_INFO_FLAG_BITS_MAX_ENUM_FUCHSIA = 0x7FFFFFFF
} VkImageConstraintsInfoFlagBitsFUCHSIA;
typedef VkFlags VkImageConstraintsInfoFlagsFUCHSIA;
typedef struct VkBufferCollectionCreateInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
zx_handle_t collectionToken;
} VkBufferCollectionCreateInfoFUCHSIA;
typedef struct VkImportMemoryBufferCollectionFUCHSIA {
VkStructureType sType;
const void* pNext;
VkBufferCollectionFUCHSIA collection;
uint32_t index;
} VkImportMemoryBufferCollectionFUCHSIA;
typedef struct VkBufferCollectionImageCreateInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkBufferCollectionFUCHSIA collection;
uint32_t index;
} VkBufferCollectionImageCreateInfoFUCHSIA;
typedef struct VkBufferCollectionConstraintsInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
uint32_t minBufferCount;
uint32_t maxBufferCount;
uint32_t minBufferCountForCamping;
uint32_t minBufferCountForDedicatedSlack;
uint32_t minBufferCountForSharedSlack;
} VkBufferCollectionConstraintsInfoFUCHSIA;
typedef struct VkBufferConstraintsInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkBufferCreateInfo createInfo;
VkFormatFeatureFlags requiredFormatFeatures;
VkBufferCollectionConstraintsInfoFUCHSIA bufferCollectionConstraints;
} VkBufferConstraintsInfoFUCHSIA;
typedef struct VkBufferCollectionBufferCreateInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkBufferCollectionFUCHSIA collection;
uint32_t index;
} VkBufferCollectionBufferCreateInfoFUCHSIA;
typedef struct VkSysmemColorSpaceFUCHSIA {
VkStructureType sType;
const void* pNext;
uint32_t colorSpace;
} VkSysmemColorSpaceFUCHSIA;
typedef struct VkBufferCollectionPropertiesFUCHSIA {
VkStructureType sType;
void* pNext;
uint32_t memoryTypeBits;
uint32_t bufferCount;
uint32_t createInfoIndex;
uint64_t sysmemPixelFormat;
VkFormatFeatureFlags formatFeatures;
VkSysmemColorSpaceFUCHSIA sysmemColorSpaceIndex;
VkComponentMapping samplerYcbcrConversionComponents;
VkSamplerYcbcrModelConversion suggestedYcbcrModel;
VkSamplerYcbcrRange suggestedYcbcrRange;
VkChromaLocation suggestedXChromaOffset;
VkChromaLocation suggestedYChromaOffset;
} VkBufferCollectionPropertiesFUCHSIA;
typedef struct VkImageFormatConstraintsInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
VkImageCreateInfo imageCreateInfo;
VkFormatFeatureFlags requiredFormatFeatures;
VkImageFormatConstraintsFlagsFUCHSIA flags;
uint64_t sysmemPixelFormat;
uint32_t colorSpaceCount;
const VkSysmemColorSpaceFUCHSIA* pColorSpaces;
} VkImageFormatConstraintsInfoFUCHSIA;
typedef struct VkImageConstraintsInfoFUCHSIA {
VkStructureType sType;
const void* pNext;
uint32_t formatConstraintsCount;
const VkImageFormatConstraintsInfoFUCHSIA* pFormatConstraints;
VkBufferCollectionConstraintsInfoFUCHSIA bufferCollectionConstraints;
VkImageConstraintsInfoFlagsFUCHSIA flags;
} VkImageConstraintsInfoFUCHSIA;
typedef VkResult (VKAPI_PTR *PFN_vkCreateBufferCollectionFUCHSIA)(VkDevice device, const VkBufferCollectionCreateInfoFUCHSIA* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferCollectionFUCHSIA* pCollection);
typedef VkResult (VKAPI_PTR *PFN_vkSetBufferCollectionImageConstraintsFUCHSIA)(VkDevice device, VkBufferCollectionFUCHSIA collection, const VkImageConstraintsInfoFUCHSIA* pImageConstraintsInfo);
typedef VkResult (VKAPI_PTR *PFN_vkSetBufferCollectionBufferConstraintsFUCHSIA)(VkDevice device, VkBufferCollectionFUCHSIA collection, const VkBufferConstraintsInfoFUCHSIA* pBufferConstraintsInfo);
typedef void (VKAPI_PTR *PFN_vkDestroyBufferCollectionFUCHSIA)(VkDevice device, VkBufferCollectionFUCHSIA collection, const VkAllocationCallbacks* pAllocator);
typedef VkResult (VKAPI_PTR *PFN_vkGetBufferCollectionPropertiesFUCHSIA)(VkDevice device, VkBufferCollectionFUCHSIA collection, VkBufferCollectionPropertiesFUCHSIA* pProperties);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferCollectionFUCHSIA(
VkDevice device,
const VkBufferCollectionCreateInfoFUCHSIA* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkBufferCollectionFUCHSIA* pCollection);
VKAPI_ATTR VkResult VKAPI_CALL vkSetBufferCollectionImageConstraintsFUCHSIA(
VkDevice device,
VkBufferCollectionFUCHSIA collection,
const VkImageConstraintsInfoFUCHSIA* pImageConstraintsInfo);
VKAPI_ATTR VkResult VKAPI_CALL vkSetBufferCollectionBufferConstraintsFUCHSIA(
VkDevice device,
VkBufferCollectionFUCHSIA collection,
const VkBufferConstraintsInfoFUCHSIA* pBufferConstraintsInfo);
VKAPI_ATTR void VKAPI_CALL vkDestroyBufferCollectionFUCHSIA(
VkDevice device,
VkBufferCollectionFUCHSIA collection,
const VkAllocationCallbacks* pAllocator);
VKAPI_ATTR VkResult VKAPI_CALL vkGetBufferCollectionPropertiesFUCHSIA(
VkDevice device,
VkBufferCollectionFUCHSIA collection,
VkBufferCollectionPropertiesFUCHSIA* pProperties);
#endif
#ifdef __cplusplus
}
#endif
#endif

File diff suppressed because it is too large

@@ -0,0 +1,58 @@
#ifndef VULKAN_GGP_H_
#define VULKAN_GGP_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_GGP_stream_descriptor_surface 1
#define VK_GGP_STREAM_DESCRIPTOR_SURFACE_SPEC_VERSION 1
#define VK_GGP_STREAM_DESCRIPTOR_SURFACE_EXTENSION_NAME "VK_GGP_stream_descriptor_surface"
typedef VkFlags VkStreamDescriptorSurfaceCreateFlagsGGP;
typedef struct VkStreamDescriptorSurfaceCreateInfoGGP {
VkStructureType sType;
const void* pNext;
VkStreamDescriptorSurfaceCreateFlagsGGP flags;
GgpStreamDescriptor streamDescriptor;
} VkStreamDescriptorSurfaceCreateInfoGGP;
typedef VkResult (VKAPI_PTR *PFN_vkCreateStreamDescriptorSurfaceGGP)(VkInstance instance, const VkStreamDescriptorSurfaceCreateInfoGGP* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateStreamDescriptorSurfaceGGP(
VkInstance instance,
const VkStreamDescriptorSurfaceCreateInfoGGP* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#define VK_GGP_frame_token 1
#define VK_GGP_FRAME_TOKEN_SPEC_VERSION 1
#define VK_GGP_FRAME_TOKEN_EXTENSION_NAME "VK_GGP_frame_token"
typedef struct VkPresentFrameTokenGGP {
VkStructureType sType;
const void* pNext;
GgpFrameToken frameToken;
} VkPresentFrameTokenGGP;
#ifdef __cplusplus
}
#endif
#endif

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,47 @@
#ifndef VULKAN_IOS_H_
#define VULKAN_IOS_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_MVK_ios_surface 1
#define VK_MVK_IOS_SURFACE_SPEC_VERSION 3
#define VK_MVK_IOS_SURFACE_EXTENSION_NAME "VK_MVK_ios_surface"
typedef VkFlags VkIOSSurfaceCreateFlagsMVK;
typedef struct VkIOSSurfaceCreateInfoMVK {
VkStructureType sType;
const void* pNext;
VkIOSSurfaceCreateFlagsMVK flags;
const void* pView;
} VkIOSSurfaceCreateInfoMVK;
typedef VkResult (VKAPI_PTR *PFN_vkCreateIOSSurfaceMVK)(VkInstance instance, const VkIOSSurfaceCreateInfoMVK* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateIOSSurfaceMVK(
VkInstance instance,
const VkIOSSurfaceCreateInfoMVK* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#ifdef __cplusplus
}
#endif
#endif

@@ -0,0 +1,47 @@
#ifndef VULKAN_MACOS_H_
#define VULKAN_MACOS_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_MVK_macos_surface 1
#define VK_MVK_MACOS_SURFACE_SPEC_VERSION 3
#define VK_MVK_MACOS_SURFACE_EXTENSION_NAME "VK_MVK_macos_surface"
typedef VkFlags VkMacOSSurfaceCreateFlagsMVK;
typedef struct VkMacOSSurfaceCreateInfoMVK {
VkStructureType sType;
const void* pNext;
VkMacOSSurfaceCreateFlagsMVK flags;
const void* pView;
} VkMacOSSurfaceCreateInfoMVK;
typedef VkResult (VKAPI_PTR *PFN_vkCreateMacOSSurfaceMVK)(VkInstance instance, const VkMacOSSurfaceCreateInfoMVK* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateMacOSSurfaceMVK(
VkInstance instance,
const VkMacOSSurfaceCreateInfoMVK* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#ifdef __cplusplus
}
#endif
#endif

@@ -0,0 +1,193 @@
#ifndef VULKAN_METAL_H_
#define VULKAN_METAL_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_EXT_metal_surface 1
#ifdef __OBJC__
@class CAMetalLayer;
#else
typedef void CAMetalLayer;
#endif
#define VK_EXT_METAL_SURFACE_SPEC_VERSION 1
#define VK_EXT_METAL_SURFACE_EXTENSION_NAME "VK_EXT_metal_surface"
typedef VkFlags VkMetalSurfaceCreateFlagsEXT;
typedef struct VkMetalSurfaceCreateInfoEXT {
VkStructureType sType;
const void* pNext;
VkMetalSurfaceCreateFlagsEXT flags;
const CAMetalLayer* pLayer;
} VkMetalSurfaceCreateInfoEXT;
typedef VkResult (VKAPI_PTR *PFN_vkCreateMetalSurfaceEXT)(VkInstance instance, const VkMetalSurfaceCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateMetalSurfaceEXT(
VkInstance instance,
const VkMetalSurfaceCreateInfoEXT* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#define VK_EXT_metal_objects 1
#ifdef __OBJC__
@protocol MTLDevice;
typedef id<MTLDevice> MTLDevice_id;
#else
typedef void* MTLDevice_id;
#endif
#ifdef __OBJC__
@protocol MTLCommandQueue;
typedef id<MTLCommandQueue> MTLCommandQueue_id;
#else
typedef void* MTLCommandQueue_id;
#endif
#ifdef __OBJC__
@protocol MTLBuffer;
typedef id<MTLBuffer> MTLBuffer_id;
#else
typedef void* MTLBuffer_id;
#endif
#ifdef __OBJC__
@protocol MTLTexture;
typedef id<MTLTexture> MTLTexture_id;
#else
typedef void* MTLTexture_id;
#endif
typedef struct __IOSurface* IOSurfaceRef;
#ifdef __OBJC__
@protocol MTLSharedEvent;
typedef id<MTLSharedEvent> MTLSharedEvent_id;
#else
typedef void* MTLSharedEvent_id;
#endif
#define VK_EXT_METAL_OBJECTS_SPEC_VERSION 1
#define VK_EXT_METAL_OBJECTS_EXTENSION_NAME "VK_EXT_metal_objects"
typedef enum VkExportMetalObjectTypeFlagBitsEXT {
VK_EXPORT_METAL_OBJECT_TYPE_METAL_DEVICE_BIT_EXT = 0x00000001,
VK_EXPORT_METAL_OBJECT_TYPE_METAL_COMMAND_QUEUE_BIT_EXT = 0x00000002,
VK_EXPORT_METAL_OBJECT_TYPE_METAL_BUFFER_BIT_EXT = 0x00000004,
VK_EXPORT_METAL_OBJECT_TYPE_METAL_TEXTURE_BIT_EXT = 0x00000008,
VK_EXPORT_METAL_OBJECT_TYPE_METAL_IOSURFACE_BIT_EXT = 0x00000010,
VK_EXPORT_METAL_OBJECT_TYPE_METAL_SHARED_EVENT_BIT_EXT = 0x00000020,
VK_EXPORT_METAL_OBJECT_TYPE_FLAG_BITS_MAX_ENUM_EXT = 0x7FFFFFFF
} VkExportMetalObjectTypeFlagBitsEXT;
typedef VkFlags VkExportMetalObjectTypeFlagsEXT;
typedef struct VkExportMetalObjectCreateInfoEXT {
VkStructureType sType;
const void* pNext;
VkExportMetalObjectTypeFlagBitsEXT exportObjectType;
} VkExportMetalObjectCreateInfoEXT;
typedef struct VkExportMetalObjectsInfoEXT {
VkStructureType sType;
const void* pNext;
} VkExportMetalObjectsInfoEXT;
typedef struct VkExportMetalDeviceInfoEXT {
VkStructureType sType;
const void* pNext;
MTLDevice_id mtlDevice;
} VkExportMetalDeviceInfoEXT;
typedef struct VkExportMetalCommandQueueInfoEXT {
VkStructureType sType;
const void* pNext;
VkQueue queue;
MTLCommandQueue_id mtlCommandQueue;
} VkExportMetalCommandQueueInfoEXT;
typedef struct VkExportMetalBufferInfoEXT {
VkStructureType sType;
const void* pNext;
VkDeviceMemory memory;
MTLBuffer_id mtlBuffer;
} VkExportMetalBufferInfoEXT;
typedef struct VkImportMetalBufferInfoEXT {
VkStructureType sType;
const void* pNext;
MTLBuffer_id mtlBuffer;
} VkImportMetalBufferInfoEXT;
typedef struct VkExportMetalTextureInfoEXT {
VkStructureType sType;
const void* pNext;
VkImage image;
VkImageView imageView;
VkBufferView bufferView;
VkImageAspectFlagBits plane;
MTLTexture_id mtlTexture;
} VkExportMetalTextureInfoEXT;
typedef struct VkImportMetalTextureInfoEXT {
VkStructureType sType;
const void* pNext;
VkImageAspectFlagBits plane;
MTLTexture_id mtlTexture;
} VkImportMetalTextureInfoEXT;
typedef struct VkExportMetalIOSurfaceInfoEXT {
VkStructureType sType;
const void* pNext;
VkImage image;
IOSurfaceRef ioSurface;
} VkExportMetalIOSurfaceInfoEXT;
typedef struct VkImportMetalIOSurfaceInfoEXT {
VkStructureType sType;
const void* pNext;
IOSurfaceRef ioSurface;
} VkImportMetalIOSurfaceInfoEXT;
typedef struct VkExportMetalSharedEventInfoEXT {
VkStructureType sType;
const void* pNext;
VkSemaphore semaphore;
VkEvent event;
MTLSharedEvent_id mtlSharedEvent;
} VkExportMetalSharedEventInfoEXT;
typedef struct VkImportMetalSharedEventInfoEXT {
VkStructureType sType;
const void* pNext;
MTLSharedEvent_id mtlSharedEvent;
} VkImportMetalSharedEventInfoEXT;
typedef void (VKAPI_PTR *PFN_vkExportMetalObjectsEXT)(VkDevice device, VkExportMetalObjectsInfoEXT* pMetalObjectsInfo);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR void VKAPI_CALL vkExportMetalObjectsEXT(
VkDevice device,
VkExportMetalObjectsInfoEXT* pMetalObjectsInfo);
#endif
#ifdef __cplusplus
}
#endif
#endif

File diff suppressed because it is too large

@@ -0,0 +1,54 @@
#ifndef VULKAN_SCREEN_H_
#define VULKAN_SCREEN_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_QNX_screen_surface 1
#define VK_QNX_SCREEN_SURFACE_SPEC_VERSION 1
#define VK_QNX_SCREEN_SURFACE_EXTENSION_NAME "VK_QNX_screen_surface"
typedef VkFlags VkScreenSurfaceCreateFlagsQNX;
typedef struct VkScreenSurfaceCreateInfoQNX {
VkStructureType sType;
const void* pNext;
VkScreenSurfaceCreateFlagsQNX flags;
struct _screen_context* context;
struct _screen_window* window;
} VkScreenSurfaceCreateInfoQNX;
typedef VkResult (VKAPI_PTR *PFN_vkCreateScreenSurfaceQNX)(VkInstance instance, const VkScreenSurfaceCreateInfoQNX* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceScreenPresentationSupportQNX)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, struct _screen_window* window);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateScreenSurfaceQNX(
VkInstance instance,
const VkScreenSurfaceCreateInfoQNX* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceScreenPresentationSupportQNX(
VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
struct _screen_window* window);
#endif
#ifdef __cplusplus
}
#endif
#endif

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

@@ -0,0 +1,47 @@
#ifndef VULKAN_VI_H_
#define VULKAN_VI_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_NN_vi_surface 1
#define VK_NN_VI_SURFACE_SPEC_VERSION 1
#define VK_NN_VI_SURFACE_EXTENSION_NAME "VK_NN_vi_surface"
typedef VkFlags VkViSurfaceCreateFlagsNN;
typedef struct VkViSurfaceCreateInfoNN {
VkStructureType sType;
const void* pNext;
VkViSurfaceCreateFlagsNN flags;
void* window;
} VkViSurfaceCreateInfoNN;
typedef VkResult (VKAPI_PTR *PFN_vkCreateViSurfaceNN)(VkInstance instance, const VkViSurfaceCreateInfoNN* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateViSurfaceNN(
VkInstance instance,
const VkViSurfaceCreateInfoNN* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
#endif
#ifdef __cplusplus
}
#endif
#endif

@@ -0,0 +1,54 @@
#ifndef VULKAN_WAYLAND_H_
#define VULKAN_WAYLAND_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_KHR_wayland_surface 1
#define VK_KHR_WAYLAND_SURFACE_SPEC_VERSION 6
#define VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME "VK_KHR_wayland_surface"
typedef VkFlags VkWaylandSurfaceCreateFlagsKHR;
typedef struct VkWaylandSurfaceCreateInfoKHR {
VkStructureType sType;
const void* pNext;
VkWaylandSurfaceCreateFlagsKHR flags;
struct wl_display* display;
struct wl_surface* surface;
} VkWaylandSurfaceCreateInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkCreateWaylandSurfaceKHR)(VkInstance instance, const VkWaylandSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, struct wl_display* display);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(
VkInstance instance,
const VkWaylandSurfaceCreateInfoKHR* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresentationSupportKHR(
VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
struct wl_display* display);
#endif
#ifdef __cplusplus
}
#endif
#endif

@@ -0,0 +1,333 @@
#ifndef VULKAN_WIN32_H_
#define VULKAN_WIN32_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_KHR_win32_surface 1
#define VK_KHR_WIN32_SURFACE_SPEC_VERSION 6
#define VK_KHR_WIN32_SURFACE_EXTENSION_NAME "VK_KHR_win32_surface"
typedef VkFlags VkWin32SurfaceCreateFlagsKHR;
typedef struct VkWin32SurfaceCreateInfoKHR {
VkStructureType sType;
const void* pNext;
VkWin32SurfaceCreateFlagsKHR flags;
HINSTANCE hinstance;
HWND hwnd;
} VkWin32SurfaceCreateInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkCreateWin32SurfaceKHR)(VkInstance instance, const VkWin32SurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(
VkInstance instance,
const VkWin32SurfaceCreateInfoKHR* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32PresentationSupportKHR(
VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex);
#endif
#define VK_KHR_external_memory_win32 1
#define VK_KHR_EXTERNAL_MEMORY_WIN32_SPEC_VERSION 1
#define VK_KHR_EXTERNAL_MEMORY_WIN32_EXTENSION_NAME "VK_KHR_external_memory_win32"
typedef struct VkImportMemoryWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
VkExternalMemoryHandleTypeFlagBits handleType;
HANDLE handle;
LPCWSTR name;
} VkImportMemoryWin32HandleInfoKHR;
typedef struct VkExportMemoryWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
const SECURITY_ATTRIBUTES* pAttributes;
DWORD dwAccess;
LPCWSTR name;
} VkExportMemoryWin32HandleInfoKHR;
typedef struct VkMemoryWin32HandlePropertiesKHR {
VkStructureType sType;
void* pNext;
uint32_t memoryTypeBits;
} VkMemoryWin32HandlePropertiesKHR;
typedef struct VkMemoryGetWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
VkDeviceMemory memory;
VkExternalMemoryHandleTypeFlagBits handleType;
} VkMemoryGetWin32HandleInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkGetMemoryWin32HandleKHR)(VkDevice device, const VkMemoryGetWin32HandleInfoKHR* pGetWin32HandleInfo, HANDLE* pHandle);
typedef VkResult (VKAPI_PTR *PFN_vkGetMemoryWin32HandlePropertiesKHR)(VkDevice device, VkExternalMemoryHandleTypeFlagBits handleType, HANDLE handle, VkMemoryWin32HandlePropertiesKHR* pMemoryWin32HandleProperties);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkGetMemoryWin32HandleKHR(
VkDevice device,
const VkMemoryGetWin32HandleInfoKHR* pGetWin32HandleInfo,
HANDLE* pHandle);
VKAPI_ATTR VkResult VKAPI_CALL vkGetMemoryWin32HandlePropertiesKHR(
VkDevice device,
VkExternalMemoryHandleTypeFlagBits handleType,
HANDLE handle,
VkMemoryWin32HandlePropertiesKHR* pMemoryWin32HandleProperties);
#endif
#define VK_KHR_win32_keyed_mutex 1
#define VK_KHR_WIN32_KEYED_MUTEX_SPEC_VERSION 1
#define VK_KHR_WIN32_KEYED_MUTEX_EXTENSION_NAME "VK_KHR_win32_keyed_mutex"
typedef struct VkWin32KeyedMutexAcquireReleaseInfoKHR {
VkStructureType sType;
const void* pNext;
uint32_t acquireCount;
const VkDeviceMemory* pAcquireSyncs;
const uint64_t* pAcquireKeys;
const uint32_t* pAcquireTimeouts;
uint32_t releaseCount;
const VkDeviceMemory* pReleaseSyncs;
const uint64_t* pReleaseKeys;
} VkWin32KeyedMutexAcquireReleaseInfoKHR;
#define VK_KHR_external_semaphore_win32 1
#define VK_KHR_EXTERNAL_SEMAPHORE_WIN32_SPEC_VERSION 1
#define VK_KHR_EXTERNAL_SEMAPHORE_WIN32_EXTENSION_NAME "VK_KHR_external_semaphore_win32"
typedef struct VkImportSemaphoreWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
VkSemaphore semaphore;
VkSemaphoreImportFlags flags;
VkExternalSemaphoreHandleTypeFlagBits handleType;
HANDLE handle;
LPCWSTR name;
} VkImportSemaphoreWin32HandleInfoKHR;
typedef struct VkExportSemaphoreWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
const SECURITY_ATTRIBUTES* pAttributes;
DWORD dwAccess;
LPCWSTR name;
} VkExportSemaphoreWin32HandleInfoKHR;
typedef struct VkD3D12FenceSubmitInfoKHR {
VkStructureType sType;
const void* pNext;
uint32_t waitSemaphoreValuesCount;
const uint64_t* pWaitSemaphoreValues;
uint32_t signalSemaphoreValuesCount;
const uint64_t* pSignalSemaphoreValues;
} VkD3D12FenceSubmitInfoKHR;
typedef struct VkSemaphoreGetWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
VkSemaphore semaphore;
VkExternalSemaphoreHandleTypeFlagBits handleType;
} VkSemaphoreGetWin32HandleInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkImportSemaphoreWin32HandleKHR)(VkDevice device, const VkImportSemaphoreWin32HandleInfoKHR* pImportSemaphoreWin32HandleInfo);
typedef VkResult (VKAPI_PTR *PFN_vkGetSemaphoreWin32HandleKHR)(VkDevice device, const VkSemaphoreGetWin32HandleInfoKHR* pGetWin32HandleInfo, HANDLE* pHandle);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkImportSemaphoreWin32HandleKHR(
VkDevice device,
const VkImportSemaphoreWin32HandleInfoKHR* pImportSemaphoreWin32HandleInfo);
VKAPI_ATTR VkResult VKAPI_CALL vkGetSemaphoreWin32HandleKHR(
VkDevice device,
const VkSemaphoreGetWin32HandleInfoKHR* pGetWin32HandleInfo,
HANDLE* pHandle);
#endif
#define VK_KHR_external_fence_win32 1
#define VK_KHR_EXTERNAL_FENCE_WIN32_SPEC_VERSION 1
#define VK_KHR_EXTERNAL_FENCE_WIN32_EXTENSION_NAME "VK_KHR_external_fence_win32"
typedef struct VkImportFenceWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
VkFence fence;
VkFenceImportFlags flags;
VkExternalFenceHandleTypeFlagBits handleType;
HANDLE handle;
LPCWSTR name;
} VkImportFenceWin32HandleInfoKHR;
typedef struct VkExportFenceWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
const SECURITY_ATTRIBUTES* pAttributes;
DWORD dwAccess;
LPCWSTR name;
} VkExportFenceWin32HandleInfoKHR;
typedef struct VkFenceGetWin32HandleInfoKHR {
VkStructureType sType;
const void* pNext;
VkFence fence;
VkExternalFenceHandleTypeFlagBits handleType;
} VkFenceGetWin32HandleInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkImportFenceWin32HandleKHR)(VkDevice device, const VkImportFenceWin32HandleInfoKHR* pImportFenceWin32HandleInfo);
typedef VkResult (VKAPI_PTR *PFN_vkGetFenceWin32HandleKHR)(VkDevice device, const VkFenceGetWin32HandleInfoKHR* pGetWin32HandleInfo, HANDLE* pHandle);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkImportFenceWin32HandleKHR(
VkDevice device,
const VkImportFenceWin32HandleInfoKHR* pImportFenceWin32HandleInfo);
VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceWin32HandleKHR(
VkDevice device,
const VkFenceGetWin32HandleInfoKHR* pGetWin32HandleInfo,
HANDLE* pHandle);
#endif
#define VK_NV_external_memory_win32 1
#define VK_NV_EXTERNAL_MEMORY_WIN32_SPEC_VERSION 1
#define VK_NV_EXTERNAL_MEMORY_WIN32_EXTENSION_NAME "VK_NV_external_memory_win32"
typedef struct VkImportMemoryWin32HandleInfoNV {
VkStructureType sType;
const void* pNext;
VkExternalMemoryHandleTypeFlagsNV handleType;
HANDLE handle;
} VkImportMemoryWin32HandleInfoNV;
typedef struct VkExportMemoryWin32HandleInfoNV {
VkStructureType sType;
const void* pNext;
const SECURITY_ATTRIBUTES* pAttributes;
DWORD dwAccess;
} VkExportMemoryWin32HandleInfoNV;
typedef VkResult (VKAPI_PTR *PFN_vkGetMemoryWin32HandleNV)(VkDevice device, VkDeviceMemory memory, VkExternalMemoryHandleTypeFlagsNV handleType, HANDLE* pHandle);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkGetMemoryWin32HandleNV(
VkDevice device,
VkDeviceMemory memory,
VkExternalMemoryHandleTypeFlagsNV handleType,
HANDLE* pHandle);
#endif
#define VK_NV_win32_keyed_mutex 1
#define VK_NV_WIN32_KEYED_MUTEX_SPEC_VERSION 2
#define VK_NV_WIN32_KEYED_MUTEX_EXTENSION_NAME "VK_NV_win32_keyed_mutex"
typedef struct VkWin32KeyedMutexAcquireReleaseInfoNV {
VkStructureType sType;
const void* pNext;
uint32_t acquireCount;
const VkDeviceMemory* pAcquireSyncs;
const uint64_t* pAcquireKeys;
const uint32_t* pAcquireTimeoutMilliseconds;
uint32_t releaseCount;
const VkDeviceMemory* pReleaseSyncs;
const uint64_t* pReleaseKeys;
} VkWin32KeyedMutexAcquireReleaseInfoNV;
#define VK_EXT_full_screen_exclusive 1
#define VK_EXT_FULL_SCREEN_EXCLUSIVE_SPEC_VERSION 4
#define VK_EXT_FULL_SCREEN_EXCLUSIVE_EXTENSION_NAME "VK_EXT_full_screen_exclusive"
typedef enum VkFullScreenExclusiveEXT {
VK_FULL_SCREEN_EXCLUSIVE_DEFAULT_EXT = 0,
VK_FULL_SCREEN_EXCLUSIVE_ALLOWED_EXT = 1,
VK_FULL_SCREEN_EXCLUSIVE_DISALLOWED_EXT = 2,
VK_FULL_SCREEN_EXCLUSIVE_APPLICATION_CONTROLLED_EXT = 3,
VK_FULL_SCREEN_EXCLUSIVE_MAX_ENUM_EXT = 0x7FFFFFFF
} VkFullScreenExclusiveEXT;
typedef struct VkSurfaceFullScreenExclusiveInfoEXT {
VkStructureType sType;
void* pNext;
VkFullScreenExclusiveEXT fullScreenExclusive;
} VkSurfaceFullScreenExclusiveInfoEXT;
typedef struct VkSurfaceCapabilitiesFullScreenExclusiveEXT {
VkStructureType sType;
void* pNext;
VkBool32 fullScreenExclusiveSupported;
} VkSurfaceCapabilitiesFullScreenExclusiveEXT;
typedef struct VkSurfaceFullScreenExclusiveWin32InfoEXT {
VkStructureType sType;
const void* pNext;
HMONITOR hmonitor;
} VkSurfaceFullScreenExclusiveWin32InfoEXT;
typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfacePresentModes2EXT)(VkPhysicalDevice physicalDevice, const VkPhysicalDeviceSurfaceInfo2KHR* pSurfaceInfo, uint32_t* pPresentModeCount, VkPresentModeKHR* pPresentModes);
typedef VkResult (VKAPI_PTR *PFN_vkAcquireFullScreenExclusiveModeEXT)(VkDevice device, VkSwapchainKHR swapchain);
typedef VkResult (VKAPI_PTR *PFN_vkReleaseFullScreenExclusiveModeEXT)(VkDevice device, VkSwapchainKHR swapchain);
typedef VkResult (VKAPI_PTR *PFN_vkGetDeviceGroupSurfacePresentModes2EXT)(VkDevice device, const VkPhysicalDeviceSurfaceInfo2KHR* pSurfaceInfo, VkDeviceGroupPresentModeFlagsKHR* pModes);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfacePresentModes2EXT(
VkPhysicalDevice physicalDevice,
const VkPhysicalDeviceSurfaceInfo2KHR* pSurfaceInfo,
uint32_t* pPresentModeCount,
VkPresentModeKHR* pPresentModes);
VKAPI_ATTR VkResult VKAPI_CALL vkAcquireFullScreenExclusiveModeEXT(
VkDevice device,
VkSwapchainKHR swapchain);
VKAPI_ATTR VkResult VKAPI_CALL vkReleaseFullScreenExclusiveModeEXT(
VkDevice device,
VkSwapchainKHR swapchain);
VKAPI_ATTR VkResult VKAPI_CALL vkGetDeviceGroupSurfacePresentModes2EXT(
VkDevice device,
const VkPhysicalDeviceSurfaceInfo2KHR* pSurfaceInfo,
VkDeviceGroupPresentModeFlagsKHR* pModes);
#endif
#define VK_NV_acquire_winrt_display 1
#define VK_NV_ACQUIRE_WINRT_DISPLAY_SPEC_VERSION 1
#define VK_NV_ACQUIRE_WINRT_DISPLAY_EXTENSION_NAME "VK_NV_acquire_winrt_display"
typedef VkResult (VKAPI_PTR *PFN_vkAcquireWinrtDisplayNV)(VkPhysicalDevice physicalDevice, VkDisplayKHR display);
typedef VkResult (VKAPI_PTR *PFN_vkGetWinrtDisplayNV)(VkPhysicalDevice physicalDevice, uint32_t deviceRelativeId, VkDisplayKHR* pDisplay);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkAcquireWinrtDisplayNV(
VkPhysicalDevice physicalDevice,
VkDisplayKHR display);
VKAPI_ATTR VkResult VKAPI_CALL vkGetWinrtDisplayNV(
VkPhysicalDevice physicalDevice,
uint32_t deviceRelativeId,
VkDisplayKHR* pDisplay);
#endif
#ifdef __cplusplus
}
#endif
#endif


@ -0,0 +1,55 @@
#ifndef VULKAN_XCB_H_
#define VULKAN_XCB_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_KHR_xcb_surface 1
#define VK_KHR_XCB_SURFACE_SPEC_VERSION 6
#define VK_KHR_XCB_SURFACE_EXTENSION_NAME "VK_KHR_xcb_surface"
typedef VkFlags VkXcbSurfaceCreateFlagsKHR;
typedef struct VkXcbSurfaceCreateInfoKHR {
VkStructureType sType;
const void* pNext;
VkXcbSurfaceCreateFlagsKHR flags;
xcb_connection_t* connection;
xcb_window_t window;
} VkXcbSurfaceCreateInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkCreateXcbSurfaceKHR)(VkInstance instance, const VkXcbSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, xcb_connection_t* connection, xcb_visualid_t visual_id);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(
VkInstance instance,
const VkXcbSurfaceCreateInfoKHR* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXcbPresentationSupportKHR(
VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
xcb_connection_t* connection,
xcb_visualid_t visual_id);
#endif
#ifdef __cplusplus
}
#endif
#endif


@ -0,0 +1,55 @@
#ifndef VULKAN_XLIB_H_
#define VULKAN_XLIB_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_KHR_xlib_surface 1
#define VK_KHR_XLIB_SURFACE_SPEC_VERSION 6
#define VK_KHR_XLIB_SURFACE_EXTENSION_NAME "VK_KHR_xlib_surface"
typedef VkFlags VkXlibSurfaceCreateFlagsKHR;
typedef struct VkXlibSurfaceCreateInfoKHR {
VkStructureType sType;
const void* pNext;
VkXlibSurfaceCreateFlagsKHR flags;
Display* dpy;
Window window;
} VkXlibSurfaceCreateInfoKHR;
typedef VkResult (VKAPI_PTR *PFN_vkCreateXlibSurfaceKHR)(VkInstance instance, const VkXlibSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display* dpy, VisualID visualID);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(
VkInstance instance,
const VkXlibSurfaceCreateInfoKHR* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSurfaceKHR* pSurface);
VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentationSupportKHR(
VkPhysicalDevice physicalDevice,
uint32_t queueFamilyIndex,
Display* dpy,
VisualID visualID);
#endif
#ifdef __cplusplus
}
#endif
#endif


@ -0,0 +1,45 @@
#ifndef VULKAN_XLIB_XRANDR_H_
#define VULKAN_XLIB_XRANDR_H_ 1
/*
** Copyright 2015-2022 The Khronos Group Inc.
**
** SPDX-License-Identifier: Apache-2.0
*/
/*
** This header is generated from the Khronos Vulkan XML API Registry.
**
*/
#ifdef __cplusplus
extern "C" {
#endif
#define VK_EXT_acquire_xlib_display 1
#define VK_EXT_ACQUIRE_XLIB_DISPLAY_SPEC_VERSION 1
#define VK_EXT_ACQUIRE_XLIB_DISPLAY_EXTENSION_NAME "VK_EXT_acquire_xlib_display"
typedef VkResult (VKAPI_PTR *PFN_vkAcquireXlibDisplayEXT)(VkPhysicalDevice physicalDevice, Display* dpy, VkDisplayKHR display);
typedef VkResult (VKAPI_PTR *PFN_vkGetRandROutputDisplayEXT)(VkPhysicalDevice physicalDevice, Display* dpy, RROutput rrOutput, VkDisplayKHR* pDisplay);
#ifndef VK_NO_PROTOTYPES
VKAPI_ATTR VkResult VKAPI_CALL vkAcquireXlibDisplayEXT(
VkPhysicalDevice physicalDevice,
Display* dpy,
VkDisplayKHR display);
VKAPI_ATTR VkResult VKAPI_CALL vkGetRandROutputDisplayEXT(
VkPhysicalDevice physicalDevice,
Display* dpy,
RROutput rrOutput,
VkDisplayKHR* pDisplay);
#endif
#ifdef __cplusplus
}
#endif
#endif


@@ -78,9 +78,19 @@ SplitPath splitpath(string str)
 string makepath(const string &drive, const string &dir, const string &stem, const string &ext)
 {
+    auto dot_position = ext.find('.');
+    if (dot_position == string::npos)
+    {
+        fs::path path(drive);
+        path = path / dir / stem;
+        path.replace_extension(ext);
+        return path.string();
+    }
+    auto filename = stem + ext;
     fs::path path(drive);
-    path = path / dir / stem;
-    path.replace_extension(ext);
+    path = path / dir / filename;
     return path.string();
 }
@@ -123,7 +133,7 @@ SplitPath splitpath(string path)
     return output;
 }
-string makepath(string drive, string dir, string stem, string ext)
+string makepath(const string &drive, const string &dir, const string &stem, const string &ext)
 {
     string output;
@@ -148,7 +158,7 @@ string makepath(string drive, string dir, string stem, string ext)
     if (!ext.empty())
     {
-        if (ext[0] != '.')
+        if (ext.find('.') == string::npos)
             output += '.';
         output += ext;
     }
@@ -215,7 +225,7 @@ void _makepath(char *path, const char *drive, const char *dir, const char *fname
     if (ext && *ext)
     {
-        if (*ext != '.')
+        if (!strchr(ext, '.'))
            strcat(path, ".");
        strcat(path, ext);
    }


@@ -36,7 +36,7 @@ add_compile_definitions(HAVE_LIBPNG
                         SNES9XLOCALEDIR=\"${LOCALEDIR}\")
 set(INCLUDES ../apu/bapu ../ src)
 set(SOURCES)
-set(ARGS -Wall -W -Wno-unused-parameter)
+set(ARGS -Wall -Wno-unused-parameter)
 set(LIBS)
 set(DEFINES)
@@ -85,6 +85,39 @@ if(USE_SLANG)
                      spirv-cross-cpp)
     list(APPEND DEFINES "USE_SLANG")
     list(APPEND INCLUDES "../external/glslang")
+    list(APPEND DEFINES "VK_USE_PLATFORM_XLIB_KHR"
+                        "VK_USE_PLATFORM_WAYLAND_KHR"
+                        "VULKAN_HPP_DISPATCH_LOADER_DYNAMIC=1"
+                        "VMA_DYNAMIC_VULKAN_FUNCTIONS=1"
+                        "VMA_STATIC_VULKAN_FUNCTIONS=0")
+    list(APPEND INCLUDES ../external/vulkan-headers/include)
+    list(APPEND INCLUDES ../external/VulkanMemoryAllocator-Hpp/include)
+    list(APPEND INCLUDES ../external/stb)
+    list(APPEND SOURCES ../external/stb/stb_image_implementation.cpp)
+    list(APPEND SOURCES ../vulkan/slang_helpers.cpp
+                        ../vulkan/slang_helpers.hpp
+                        ../vulkan/slang_shader.cpp
+                        ../vulkan/slang_shader.hpp
+                        ../vulkan/slang_preset.cpp
+                        ../vulkan/slang_preset.hpp
+                        ../vulkan/slang_preset_ini.cpp
+                        ../vulkan/slang_preset_ini.hpp
+                        ../vulkan/vulkan_hpp_storage.cpp
+                        ../vulkan/vk_mem_alloc_implementation.cpp
+                        ../vulkan/vulkan_context.cpp
+                        ../vulkan/vulkan_context.hpp
+                        ../vulkan/vulkan_texture.cpp
+                        ../vulkan/vulkan_texture.hpp
+                        ../vulkan/vulkan_swapchain.cpp
+                        ../vulkan/vulkan_swapchain.hpp
+                        ../vulkan/vulkan_slang_pipeline.cpp
+                        ../vulkan/vulkan_slang_pipeline.hpp
+                        ../vulkan/vulkan_pipeline_image.cpp
+                        ../vulkan/vulkan_pipeline_image.hpp
+                        ../vulkan/vulkan_shader_chain.cpp
+                        ../vulkan/vulkan_shader_chain.hpp)
 endif()
 if(USE_WAYLAND)
@@ -189,6 +222,8 @@ list(APPEND SOURCES
     src/gtk_control.h
     src/gtk_display.cpp
     src/gtk_display_driver_gtk.cpp
+    src/gtk_display_driver_vulkan.cpp
+    src/gtk_display_driver_vulkan.h
     src/gtk_display_driver_gtk.h
     src/gtk_display_driver.h
     src/gtk_display.h
@@ -318,6 +353,39 @@ target_compile_options(snes9x-gtk PRIVATE ${ARGS})
 target_link_libraries(snes9x-gtk PRIVATE ${LIBS})
 target_compile_definitions(snes9x-gtk PRIVATE ${DEFINES})
+add_executable(slang_test ../vulkan/slang_helpers.cpp
+                          ../vulkan/slang_helpers.hpp
+                          ../vulkan/slang_shader.cpp
+                          ../vulkan/slang_shader.hpp
+                          ../vulkan/slang_preset.cpp
+                          ../vulkan/slang_preset.hpp
+                          ../vulkan/slang_preset_ini.cpp
+                          ../vulkan/slang_preset_ini.hpp
+                          ../vulkan/vulkan_hpp_storage.cpp
+                          ../vulkan/slang_preset_test.cpp
+                          ../conffile.cpp
+                          ../stream.cpp)
+#add_executable(vulkan_test ../vulkan/slang_helpers.cpp
+#                           ../vulkan/slang_helpers.hpp
+#                           ../vulkan/slang_shader.cpp
+#                           ../vulkan/slang_shader.hpp
+#                           ../vulkan/slang_preset.cpp
+#                           ../vulkan/slang_preset.hpp
+#                           ../vulkan/vulkan_hpp_storage.cpp
+#                           ../vulkan/test.cpp
+#                           ../vulkan/vk2d.cpp
+#                           ../conffile.cpp
+#                           ../stream.cpp)
+#target_include_directories(vulkan_test PRIVATE ${INCLUDES})
+#target_compile_options(vulkan_test PRIVATE ${ARGS})
+#target_compile_definitions(vulkan_test PRIVATE ${DEFINES} "VULKAN_HPP_NO_EXCEPTIONS" "VULKAN_HPP_ASSERT_ON_RESULT=")
+#target_link_libraries(vulkan_test PRIVATE ${LIBS})
+target_include_directories(slang_test PRIVATE ${INCLUDES})
+target_compile_options(slang_test PRIVATE ${ARGS})
+target_compile_definitions(slang_test PRIVATE ${DEFINES})
+target_link_libraries(slang_test PRIVATE ${LIBS})
 install(TARGETS snes9x-gtk)
 install(FILES ../data/cheats.bml DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/${CMAKE_INSTALL_DATADIR})
 install(FILES data/snes9x-gtk.desktop DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/applications)


@@ -4,13 +4,6 @@
     For further information, consult the LICENSE file in the root directory.
 \*****************************************************************************/
-#include "snes9x.h"
-#include "memmap.h"
-#include "cpuops.h"
-#include "dma.h"
-#include "apu/apu.h"
-#include "fxemu.h"
-#include "snapshot.h"
 #ifdef DEBUGGER
 #include "debug.h"
 #include "missing.h"


@@ -13,7 +13,6 @@
 #include "fmt/format.h"
 #include "gtk_config.h"
 #include "gtk_s9x.h"
-#include "gtk_sound.h"
 #include "gtk_display.h"
 #include "conffile.h"
 #include "cheats.h"
@@ -82,7 +81,7 @@ int Snes9xConfig::load_defaults()
     rom_loaded = false;
     multithreading = false;
     splash_image = SPLASH_IMAGE_STARFIELD;
-    display_driver = "OpenGL";
+    display_driver = "opengl";
     allow_opengl = false;
     allow_xv = false;
     allow_xrandr = false;
@@ -227,7 +226,7 @@ int Snes9xConfig::save_config_file()
     outint("ScanlineFilterIntensity", scanline_filter_intensity, "0: 0%, 1: 12.5%, 2: 25%, 3: 50%, 4: 100%");
     outint("HiresEffect", hires_effect, "0: Downscale to low-res, 1: Leave as-is, 2: Upscale low-res screens");
     outint("NumberOfThreads", num_threads);
-    outstring("HardwareAcceleration", display_driver, "None, OpenGL, Xv, Vulkan");
+    outstring("HardwareAcceleration", display_driver, "none, opengl, xv, vulkan");
     outint("SplashBackground", splash_image, "0: Black, 1: Color bars, 2: Pattern, 3: Blue, 4: Default");
     section = "NTSC";


@@ -24,6 +24,7 @@
 #endif
 #include "gtk_display_driver_opengl.h"
+#include "gtk_display_driver_vulkan.h"
 void filter_scanlines(uint8 *, int, uint8 *, int, int, int);
 void filter_2x(uint8 *, int, uint8 *, int, int, int);
@@ -801,17 +802,21 @@ void S9xQueryDrivers()
     auto &dd = gui_config->display_drivers;
     dd.clear();
-    dd.push_back("None");
+    dd.push_back("none");
     if (gui_config->allow_opengl)
-        dd.push_back("OpenGL");
+        dd.push_back("opengl");
     if (gui_config->allow_xv)
-        dd.push_back("Xv");
+        dd.push_back("xv");
+    dd.push_back("vulkan");
 }
 bool8 S9xDeinitUpdate(int width, int height)
 {
     int yoffset = 0;
+    if (width <= 0 || height <= 0)
+        return false;
     if (top_level->last_height > height)
     {
         memset(GFX.Screen + GFX.RealPPL * height,
@@ -849,7 +854,7 @@ bool8 S9xDeinitUpdate(int width, int height)
         }
     }
-    uint16_t *screen_view = GFX.Screen + yoffset * GFX.RealPPL;
+    uint16_t *screen_view = GFX.Screen + (yoffset * (int)GFX.RealPPL);
     if (!Settings.Paused && !NetPlay.Paused)
@@ -895,16 +900,20 @@ static void S9xInitDriver()
 #ifdef GDK_WINDOWING_WAYLAND
     if (GDK_IS_WAYLAND_DISPLAY(gdk_display_get_default()))
     {
-        gui_config->display_driver = "OpenGL";
+        gui_config->display_driver = "opengl";
     }
 #endif
-    if ("OpenGL" == gui_config->display_driver)
+    if ("opengl" == gui_config->display_driver)
     {
         driver = new S9xOpenGLDisplayDriver(top_level, gui_config);
     }
+    else if ("vulkan" == gui_config->display_driver)
+    {
+        driver = new S9xVulkanDisplayDriver(top_level, gui_config);
+    }
 #if defined(USE_XV) && defined(GDK_WINDOWING_X11)
-    else if ("Xv" == gui_config->display_driver)
+    else if ("xv" == gui_config->display_driver)
     {
         driver = new S9xXVDisplayDriver(top_level, gui_config);
     }
@@ -917,7 +926,7 @@ static void S9xInitDriver()
     if (driver->init())
     {
         delete driver;
-        gui_config->display_driver = "None";
+        gui_config->display_driver = "none";
         driver->init();
     }


@@ -4,7 +4,6 @@
     For further information, consult the LICENSE file in the root directory.
 \*****************************************************************************/
-#include "gtk_compat.h"
 #include <cairo.h>
 #include "gtk_display.h"
 #include "gtk_display_driver_gtk.h"


@@ -4,7 +4,6 @@
     For further information, consult the LICENSE file in the root directory.
 \*****************************************************************************/
-#include "gtk_compat.h"
 #include <dlfcn.h>
 #include <sys/stat.h>
 #include <fcntl.h>
@@ -13,7 +12,6 @@
 #include "gtk_display.h"
 #include "gtk_display_driver_opengl.h"
 #include "gtk_shader_parameters.h"
-#include "shaders/shader_helpers.h"
 static const GLchar *stock_vertex_shader_110 =
     "#version 110\n"


@ -0,0 +1,357 @@
/*****************************************************************************\
Snes9x - Portable Super Nintendo Entertainment System (TM) emulator.
This file is licensed under the Snes9x License.
For further information, consult the LICENSE file in the root directory.
\*****************************************************************************/
#include "gtk_compat.h"
#include "gtk_display.h"
#include "gtk_display_driver_vulkan.h"
#include "gtk_shader_parameters.h"
#include "../../vulkan/vulkan_context.hpp"
#include "../../vulkan/slang_shader.hpp"
#include "../../vulkan/slang_helpers.hpp"
#include "../../vulkan/vulkan_shader_chain.hpp"
#include "snes9x.h"
#include "gfx.h"
#include "fmt/format.h"
static const char *vertex_shader = R"(
#version 450
layout(location = 0) out vec2 texcoord;
vec2 positions[3] = vec2[](vec2(-1.0, -3.0), vec2(3.0, 1.0), vec2(-1.0, 1.0));
vec2 texcoords[3] = vec2[](vec2(0.0, -1.0), vec2(2.0, 1.0), vec2(0.0, 1.0));
void main()
{
gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
texcoord = texcoords[gl_VertexIndex];
}
)";
static const char *fragment_shader = R"(
#version 450
layout(location = 0) in vec2 texcoord;
layout(binding = 0) uniform sampler2D tsampler;
layout(location = 0) out vec4 fragcolor;
void main()
{
fragcolor = texture(tsampler, texcoord);
}
)";
void S9xVulkanDisplayDriver::create_pipeline()
{
auto vertex_spirv = SlangShader::generate_spirv(vertex_shader, "vertex");
auto fragment_spirv = SlangShader::generate_spirv(fragment_shader, "fragment");
auto vertex_module = device.createShaderModuleUnique({ {}, vertex_spirv });
auto fragment_module = device.createShaderModuleUnique({ {}, fragment_spirv });
vk::PipelineShaderStageCreateInfo vertex_ci;
vertex_ci.setStage(vk::ShaderStageFlagBits::eVertex)
.setModule(vertex_module.get())
.setPName("main");
vk::PipelineShaderStageCreateInfo fragment_ci;
fragment_ci.setStage(vk::ShaderStageFlagBits::eFragment)
.setModule(fragment_module.get())
.setPName("main");
std::vector<vk::PipelineShaderStageCreateInfo> stages = { vertex_ci, fragment_ci };
vk::PipelineVertexInputStateCreateInfo vertex_input_info{};
vk::PipelineInputAssemblyStateCreateInfo pipeline_input_assembly_info{};
pipeline_input_assembly_info.setTopology(vk::PrimitiveTopology::eTriangleList)
.setPrimitiveRestartEnable(false);
std::vector<vk::Viewport> viewports(1);
viewports[0]
.setX(0.0f)
.setY(0.0f)
.setWidth(256)
.setHeight(256)
.setMinDepth(0.0f)
.setMaxDepth(1.0f);
std::vector<vk::Rect2D> scissors(1);
scissors[0].extent.width = 256;
scissors[0].extent.height = 256;
scissors[0].offset = vk::Offset2D(0, 0);
vk::PipelineViewportStateCreateInfo pipeline_viewport_info;
pipeline_viewport_info.setViewports(viewports)
.setScissors(scissors);
vk::PipelineRasterizationStateCreateInfo rasterizer_info;
rasterizer_info.setCullMode(vk::CullModeFlagBits::eBack)
.setFrontFace(vk::FrontFace::eClockwise)
.setLineWidth(1.0f)
.setDepthClampEnable(false)
.setRasterizerDiscardEnable(false)
.setPolygonMode(vk::PolygonMode::eFill)
.setDepthBiasEnable(false);
vk::PipelineMultisampleStateCreateInfo multisample_info;
multisample_info.setSampleShadingEnable(false)
.setRasterizationSamples(vk::SampleCountFlagBits::e1);
vk::PipelineDepthStencilStateCreateInfo depth_stencil_info;
depth_stencil_info.setDepthTestEnable(false);
vk::PipelineColorBlendAttachmentState blend_attachment_info;
blend_attachment_info
.setColorWriteMask(vk::ColorComponentFlagBits::eB |
vk::ColorComponentFlagBits::eG |
vk::ColorComponentFlagBits::eR |
vk::ColorComponentFlagBits::eA)
.setBlendEnable(true)
.setColorBlendOp(vk::BlendOp::eAdd)
.setSrcColorBlendFactor(vk::BlendFactor::eSrcAlpha)
.setDstColorBlendFactor(vk::BlendFactor::eOneMinusSrcAlpha)
.setAlphaBlendOp(vk::BlendOp::eAdd)
.setSrcAlphaBlendFactor(vk::BlendFactor::eOne)
.setDstAlphaBlendFactor(vk::BlendFactor::eZero);
vk::PipelineColorBlendStateCreateInfo blend_state_info;
blend_state_info.setLogicOpEnable(false)
.setAttachments(blend_attachment_info);
std::vector<vk::DynamicState> states = { vk::DynamicState::eViewport, vk::DynamicState::eScissor };
vk::PipelineDynamicStateCreateInfo dynamic_state_info({}, states);
vk::DescriptorSetLayoutBinding dslb{};
dslb.setBinding(0)
.setStageFlags(vk::ShaderStageFlagBits::eFragment)
.setDescriptorCount(1)
.setDescriptorType(vk::DescriptorType::eCombinedImageSampler);
vk::DescriptorSetLayoutCreateInfo dslci{};
dslci.setBindings(dslb);
descriptor_set_layout = device.createDescriptorSetLayoutUnique(dslci);
vk::PipelineLayoutCreateInfo pipeline_layout_info;
pipeline_layout_info.setSetLayoutCount(0)
.setPushConstantRangeCount(0)
.setSetLayouts(descriptor_set_layout.get());
pipeline_layout = device.createPipelineLayoutUnique(pipeline_layout_info);
vk::GraphicsPipelineCreateInfo pipeline_create_info;
pipeline_create_info.setStageCount(2)
.setStages(stages)
.setPVertexInputState(&vertex_input_info)
.setPInputAssemblyState(&pipeline_input_assembly_info)
.setPViewportState(&pipeline_viewport_info)
.setPRasterizationState(&rasterizer_info)
.setPMultisampleState(&multisample_info)
.setPDepthStencilState(&depth_stencil_info)
.setPColorBlendState(&blend_state_info)
.setPDynamicState(&dynamic_state_info)
.setLayout(pipeline_layout.get())
.setRenderPass(swapchain->get_render_pass())
.setSubpass(0);
auto [result, pipeline] = device.createGraphicsPipelineUnique(nullptr, pipeline_create_info);
this->pipeline = std::move(pipeline);
}
S9xVulkanDisplayDriver::S9xVulkanDisplayDriver(Snes9xWindow *_window, Snes9xConfig *_config)
{
window = _window;
config = _config;
drawing_area = window->drawing_area;
gdk_window = nullptr;
gdk_display = nullptr;
context.reset();
}
S9xVulkanDisplayDriver::~S9xVulkanDisplayDriver()
{
}
void S9xVulkanDisplayDriver::refresh(int width, int height)
{
if (!context)
return;
bool vsync_changed = context->swapchain->set_vsync(gui_config->sync_to_vblank);
auto new_width = drawing_area->get_width() * drawing_area->get_scale_factor();
auto new_height = drawing_area->get_height() * drawing_area->get_scale_factor();
if (new_width != current_width || new_height != current_height || vsync_changed)
{
context->recreate_swapchain();
context->wait_idle();
current_width = new_width;
current_height = new_height;
}
context->swapchain->set_vsync(gui_config->sync_to_vblank);
}
int S9xVulkanDisplayDriver::init()
{
current_width = drawing_area->get_width() * drawing_area->get_scale_factor();
current_height = drawing_area->get_height() * drawing_area->get_scale_factor();
display = gdk_x11_display_get_xdisplay(drawing_area->get_display()->gobj());
xid = gdk_x11_window_get_xid(drawing_area->get_window()->gobj());
context = std::make_unique<Vulkan::Context>();
context->init(display, xid);
swapchain = context->swapchain.get();
device = context->device;
if (!gui_config->shader_filename.empty() && gui_config->use_shaders)
{
shaderchain = std::make_unique<Vulkan::ShaderChain>(context.get());
if (!shaderchain->load_shader_preset(gui_config->shader_filename))
{
fmt::print("Couldn't load shader preset file\n");
shaderchain = nullptr;
}
else
{
window->enable_widget("shader_parameters_item", true);
return 0;
}
}
create_pipeline();
descriptors.clear();
for (size_t i = 0; i < swapchain->get_num_frames(); i++)
{
vk::DescriptorSetAllocateInfo dsai{};
dsai
.setDescriptorPool(context->descriptor_pool.get())
.setDescriptorSetCount(1)
.setSetLayouts(descriptor_set_layout.get());
auto descriptor = device.allocateDescriptorSetsUnique(dsai);
descriptors.push_back(std::move(descriptor[0]));
}
textures.clear();
textures.resize(swapchain->get_num_frames());
for (auto &t : textures)
{
t.init(context.get());
t.create(256, 224, vk::Format::eR5G6B5UnormPack16, vk::SamplerAddressMode::eClampToEdge, Settings.BilinearFilter, false);
}
vk::SamplerCreateInfo sci{};
sci.setAddressModeU(vk::SamplerAddressMode::eClampToEdge)
.setAddressModeV(vk::SamplerAddressMode::eClampToEdge)
.setAddressModeW(vk::SamplerAddressMode::eClampToEdge)
.setMipmapMode(vk::SamplerMipmapMode::eLinear)
.setAnisotropyEnable(false)
.setMinFilter(vk::Filter::eLinear)
.setMagFilter(vk::Filter::eLinear)
.setUnnormalizedCoordinates(false)
.setMinLod(1.0f)
.setMaxLod(1.0f)
.setMipLodBias(0.0)
.setCompareEnable(false);
linear_sampler = device.createSampler(sci);
sci.setMinFilter(vk::Filter::eNearest)
.setMagFilter(vk::Filter::eNearest);
nearest_sampler = device.createSampler(sci);
return 0;
}
void S9xVulkanDisplayDriver::deinit()
{
if (!context)
return;
if (shaderchain)
gtk_shader_parameters_dialog_close();
context->wait_idle();
textures.clear();
descriptors.clear();
device.destroySampler(linear_sampler);
device.destroySampler(nearest_sampler);
}
void S9xVulkanDisplayDriver::update(uint16_t *buffer, int width, int height, int stride_in_pixels)
{
if (!context)
return;
auto viewport = S9xApplyAspect(width, height, current_width, current_height);
if (shaderchain)
{
shaderchain->do_frame((uint8_t *)buffer, width, height, stride_in_pixels << 1, vk::Format::eR5G6B5UnormPack16, viewport.x, viewport.y, viewport.w, viewport.h);
return;
}
if (!swapchain->begin_frame())
return;
auto &tex = textures[swapchain->get_current_frame()];
auto &cmd = swapchain->get_cmd();
auto extents = swapchain->get_extents();
auto &dstset = descriptors[swapchain->get_current_frame()].get();
tex.from_buffer(cmd, (uint8_t *)buffer, width, height, stride_in_pixels * 2);
swapchain->begin_render_pass();
vk::DescriptorImageInfo dii{};
dii.setImageView(tex.image_view)
.setSampler(Settings.BilinearFilter ? linear_sampler : nearest_sampler)
.setImageLayout(vk::ImageLayout::eShaderReadOnlyOptimal);
vk::WriteDescriptorSet wds{};
wds.setDescriptorCount(1)
.setDstBinding(0)
.setDstArrayElement(0)
.setDstSet(dstset)
.setDescriptorType(vk::DescriptorType::eCombinedImageSampler)
.setImageInfo(dii);
device.updateDescriptorSets(wds, {});
cmd.bindPipeline(vk::PipelineBindPoint::eGraphics, pipeline.get());
cmd.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, pipeline_layout.get(), 0, dstset, {});
auto dest_rect = S9xApplyAspect(width, height, extents.width, extents.height);
cmd.setViewport(0, vk::Viewport(dest_rect.x, dest_rect.y, dest_rect.w, dest_rect.h, 0.0f, 1.0f));
cmd.setScissor(0, vk::Rect2D({}, extents));
cmd.draw(3, 1, 0, 0);
swapchain->end_render_pass();
swapchain->end_frame();
}
int S9xVulkanDisplayDriver::query_availability()
{
return 0;
}
void *S9xVulkanDisplayDriver::get_parameters()
{
if (shaderchain)
return &shaderchain->preset->parameters;
return nullptr;
}
void S9xVulkanDisplayDriver::save(const char *filename)
{
if (shaderchain)
shaderchain->preset->save_to_file(filename);
}
bool S9xVulkanDisplayDriver::is_ready()
{
return true;
}


@@ -0,0 +1,55 @@
/*****************************************************************************\
Snes9x - Portable Super Nintendo Entertainment System (TM) emulator.
This file is licensed under the Snes9x License.
For further information, consult the LICENSE file in the root directory.
\*****************************************************************************/
#pragma once
#include "gtk_s9x.h"
#include "gtk_display_driver.h"
#include "../../vulkan/vulkan_context.hpp"
#include "../../vulkan/vulkan_texture.hpp"
#include "../../vulkan/slang_preset.hpp"
#include "../../vulkan/vulkan_shader_chain.hpp"
class S9xVulkanDisplayDriver : public S9xDisplayDriver
{
public:
S9xVulkanDisplayDriver(Snes9xWindow *window, Snes9xConfig *config);
~S9xVulkanDisplayDriver();
void refresh(int width, int height) override;
int init() override;
void deinit() override;
void update(uint16_t *buffer, int width, int height, int stride_in_pixels) override;
void *get_parameters() override;
void save(const char *filename) override;
bool is_ready() override;
static int query_availability();
private:
std::unique_ptr<Vulkan::Context> context;
Vulkan::Swapchain *swapchain;
vk::Device device;
GdkDisplay *gdk_display;
GdkWindow *gdk_window;
Display *display;
Window xid;
Colormap colormap;
int current_width;
int current_height;
void create_pipeline();
vk::UniqueDescriptorSetLayout descriptor_set_layout;
vk::UniquePipelineLayout pipeline_layout;
vk::UniquePipeline pipeline;
vk::Sampler linear_sampler;
vk::Sampler nearest_sampler;
void draw_buffer(uint8_t *buffer, int width, int height, int byte_stride);
bool filter = true;
std::vector<Vulkan::Texture> textures;
std::vector<vk::UniqueDescriptorSet> descriptors;
std::unique_ptr<Vulkan::ShaderChain> shaderchain;
};


@@ -141,10 +141,12 @@ Snes9xPreferences::Snes9xPreferences(Snes9xConfig *config)
     for (const auto &driver : config->display_drivers)
     {
         std::string entry;
-        if (!strcasecmp(driver.c_str(), "opengl"))
+        if (driver == "opengl")
             entry = _("OpenGL - Use 3D graphics hardware");
-        else if (!strcasecmp(driver.c_str(), "Xv"))
+        else if (driver == "xv")
             entry = _("XVideo - Use hardware video blitter");
+        else if (driver == "vulkan")
+            entry = _("Vulkan");
         else
             entry = _("None - Use software scaler");
@@ -176,9 +178,10 @@ void Snes9xPreferences::connect_signals()
     get_object<Gtk::ComboBox>("hw_accel")->signal_changed().connect([&] {
         int id = get_combo("hw_accel");
-        show_widget("bilinear_filter", config->display_drivers[id] != "Xv");
-        show_widget("opengl_frame", config->display_drivers[id] == "OpenGL");
-        show_widget("xv_frame", config->display_drivers[id] == "Xv");
+        show_widget("bilinear_filter", config->display_drivers[id] != "xv");
+        show_widget("opengl_frame", config->display_drivers[id] == "opengl" ||
+                                    config->display_drivers[id] == "vulkan");
+        show_widget("xv_frame", config->display_drivers[id] == "xv");
     });
     get_object<Gtk::Button>("reset_current_joypad")->signal_pressed().connect(sigc::mem_fun(*this, &Snes9xPreferences::reset_current_joypad));


@@ -6,8 +6,6 @@
 #include <stdio.h>
 #include <signal.h>
-#include "giomm/application.h"
-#include "glibmm/main.h"
 #include "gtk_compat.h"
 #include "gtk_config.h"
 #include "gtk_s9x.h"
@@ -84,6 +82,10 @@ int main(int argc, char *argv[])
         exit(3);
     top_level = new Snes9xWindow(gui_config);
+#ifdef GDK_WINDOWING_X11
+    if (!GDK_IS_X11_WINDOW(top_level->window->get_window()->gobj()))
+        XInitThreads();
+#endif
     // Setting fullscreen before showing the window avoids some flicker.
     if ((gui_config->full_screen_on_open && rom_filename) || (gui_config->fullscreen))


@@ -515,6 +515,7 @@ void Snes9xWindow::setup_splash()
         return;
     }
+    return;
     for (int y = 0; y < 224; y++, screen_ptr += (GFX.Pitch / 2)) {
         memset(screen_ptr, 0, 256 * sizeof(uint16));


@@ -5,6 +5,7 @@
 \*****************************************************************************/
 #include "gtk_sound_driver_sdl.h"
+#include "SDL_audio.h"
 #include "gtk_s9x.h"
 #include "apu/apu.h"
 #include "snes9x.h"
@@ -85,6 +86,12 @@ bool S9xSDLSoundDriver::open_device()
     audiospec.userdata = this;
+    char *name;
+    SDL_AudioSpec spec;
+    SDL_GetDefaultAudioInfo(&name, &spec, 0);
+    printf("%s\n", name);
+    SDL_free(name);
     printf("SDL sound driver initializing...\n");
     printf("    --> (Frequency: %dhz, Latency: %dms)...",
            audiospec.freq,


@@ -3994,7 +3994,7 @@ void CMemory::CheckForAnyPatch (const char *rom_filename, bool8 header, int32 &r
 #endif
     // BPS
-    std::string filename = patch_path + S9xGetFilename(".bps", PATCH_DIR);
+    std::string filename = S9xGetFilename(".bps", PATCH_DIR);
     if ((patch_file = OPEN_FSTREAM(filename.c_str(), "rb")) != NULL)
     {
@@ -4014,7 +4014,7 @@ void CMemory::CheckForAnyPatch (const char *rom_filename, bool8 header, int32 &r
     // UPS
-    filename = patch_path + S9xGetFilename(".ups", PATCH_DIR);
+    filename = S9xGetFilename(".ups", PATCH_DIR);
     if ((patch_file = OPEN_FSTREAM(filename.c_str(), "rb")) != NULL)
     {
@@ -4034,7 +4034,7 @@ void CMemory::CheckForAnyPatch (const char *rom_filename, bool8 header, int32 &r
     // IPS
-    filename = patch_path + S9xGetFilename(".ips", PATCH_DIR);
+    filename = S9xGetFilename(".ips", PATCH_DIR);
     if ((patch_file = OPEN_FSTREAM(filename.c_str(), "rb")) != NULL)
     {


@@ -340,10 +340,14 @@ void GLSLShader::read_shader_file_with_includes(std::string filename,
     }
     else if (line.compare(0, 17, "#pragma parameter") == 0)
     {
+        char id[PATH_MAX];
+        char name[PATH_MAX];
         GLSLParam par;
         sscanf(line.c_str(), "#pragma parameter %s \"%[^\"]\" %f %f %f %f",
-               par.id, par.name, &par.val, &par.min, &par.max, &par.step);
+               id, name, &par.val, &par.min, &par.max, &par.step);
+        par.id = id;
+        par.name = name;
         unsigned int last_decimal = line.rfind(".") + 1;
         unsigned int index = last_decimal;
@@ -359,7 +363,7 @@ void GLSLShader::read_shader_file_with_includes(std::string filename,
     unsigned int i = 0;
     for (; i < param.size(); i++)
     {
-        if (!strcmp(param[i].id, par.id))
+        if (param[i].id == par.id)
             break;
     }
     if (i >= param.size())
@@ -650,7 +654,7 @@ bool GLSLShader::load_shader(const char *filename)
     {
         char key[266];
         const char *value;
-        snprintf (key, 266, "::%s", param[i].id);
+        snprintf (key, 266, "::%s", param[i].id.c_str());
         value = conf.GetString (key, NULL);
         if (value)
         {
@@ -947,6 +951,7 @@ void GLSLShader::register_uniforms()
     max_prev_frame = 0;
     char varname[100];
+    unif.resize(pass.size());
     for (unsigned int i = 1; i < pass.size(); i++)
     {
         GLSLUniforms *u = &pass[i].unif;
@@ -1027,9 +1032,10 @@ void GLSLShader::register_uniforms()
             u->Lut[j] = glGetUniformLocation(program, lut[j].id);
         }
-        for (unsigned int j = 0; j < param.size(); j++)
+        unif[i].resize(param.size());
+        for (unsigned int param_num = 0; param_num < param.size(); param_num++)
         {
-            param[j].unif[i] = glGetUniformLocation(program, param[j].id);
+            unif[i][param_num] = glGetUniformLocation(program, param[param_num].id.c_str());
        }
    }
@@ -1160,7 +1166,7 @@ void GLSLShader::set_shader_vars(unsigned int p, bool inverted)
     // User and Preset Parameters
     for (unsigned int i = 0; i < param.size(); i++)
     {
-        setUniform1f(param[i].unif[p], param[i].val);
+        setUniform1f(unif[p][i], param[i].val);
     }
     glActiveTexture(GL_TEXTURE0);
@@ -1230,14 +1236,14 @@ void GLSLShader::save(const char *filename)
         fprintf(file, "parameters = \"");
         for (unsigned int i = 0; i < param.size(); i++)
         {
-            fprintf(file, "%s%c", param[i].id, (i == param.size() - 1) ? '\"' : ';');
+            fprintf(file, "%s%c", param[i].id.c_str(), (i == param.size() - 1) ? '\"' : ';');
         }
         fprintf(file, "\n");
     }
     for (unsigned int i = 0; i < param.size(); i++)
     {
-        fprintf(file, "%s = \"%f\"\n", param[i].id, param[i].val);
+        fprintf(file, "%s = \"%f\"\n", param[i].id.c_str(), param[i].val);
     }
     if (lut.size() > 0)


@@ -145,14 +145,13 @@ struct GLSLLut
 struct GLSLParam
 {
-    char name[PATH_MAX];
-    char id[256];
+    std::string name;
+    std::string id;
     float min;
     float max;
     float val;
     float step;
     int digits;
-    GLint unif[glsl_max_passes];
 };
 struct GLSLShader
@@ -178,6 +177,8 @@ struct GLSLShader
     std::vector<GLSLPass> pass;
     std::vector<GLSLLut> lut;
     std::vector<GLSLParam> param;
+    std::vector<std::vector<GLint>> unif;
     int max_prev_frame;
     std::deque<GLSLPass> prev_frame;
     std::vector<GLuint> vaos;

128
vulkan/slang_helpers.cpp Normal file

@@ -0,0 +1,128 @@
#include "slang_helpers.hpp"
#include <string>
#include <vector>
#include <filesystem>
#include <cmath>
#include <cctype>
using std::string;
using std::string_view;
using std::vector;
namespace fs = std::filesystem;
int mipmap_levels_for_size(int width, int height)
{
return (int)log2(width > height ? width : height);
}
void trim(string_view &view)
{
while (view.length() > 0 && isspace((unsigned char)view.at(0)))
view.remove_prefix(1);
while (view.length() > 0 && isspace((unsigned char)view.at(view.length() - 1)))
view.remove_suffix(1);
}
string trim(const string& str)
{
string_view sv(str);
trim(sv);
return string(sv);
}
int get_significant_digits(const string_view &view)
{
auto pos = view.rfind('.');
if (pos == string_view::npos)
return 0;
return view.size() - pos - 1;
}
vector<string> split_string_quotes(const string_view &view)
{
vector<string> tokens;
size_t pos = 0;
while (pos < view.length())
{
size_t indexa = view.find_first_not_of("\t\r\n ", pos);
size_t indexb = 0;
if (indexa == string::npos)
break;
if (view.at(indexa) == '\"')
{
indexa++;
indexb = view.find_first_of('\"', indexa);
if (indexb == string::npos)
break;
}
else
{
indexb = view.find_first_of("\t\r\n ", indexa);
if (indexb == string::npos)
indexb = view.size();
}
if (indexb > indexa)
tokens.push_back(string{view.substr(indexa, indexb - indexa)});
pos = indexb + 1;
}
return tokens;
}
vector<string> split_string(const string_view &str, unsigned char delim)
{
vector<string> tokens;
size_t pos = 0;
size_t index;
while (pos < str.length())
{
index = str.find(delim, pos);
if (index == string::npos)
{
if (pos < str.length())
{
tokens.push_back(string{str.substr(pos)});
}
break;
}
else if (index > pos)
{
tokens.push_back(string{str.substr(pos, index - pos)});
}
pos = index + 1;
}
return tokens;
}
bool ends_with(const string &str, const string &ext)
{
if (ext.size() > str.size())
return false;
auto icmp = [](const unsigned char a, const unsigned char b) -> bool {
return std::tolower(a) == std::tolower(b);
};
return std::equal(ext.crbegin(), ext.crend(), str.crbegin(), icmp);
}
void canonicalize(string &filename, const string &base)
{
fs::path path(filename);
if (path.is_relative())
{
fs::path base_path(base);
base_path.remove_filename();
path = fs::weakly_canonical(base_path / path);
filename = path.string();
}
}

13
vulkan/slang_helpers.hpp Normal file

@@ -0,0 +1,13 @@
#pragma once
#include <string>
#include <vector>
#include <string_view>
int get_significant_digits(const std::string_view &view);
std::string trim(const std::string &str);
void trim(std::string_view &view);
std::vector<std::string> split_string(const std::string_view &str, unsigned char delim);
std::vector<std::string> split_string_quotes(const std::string_view &view);
bool ends_with(const std::string &str, const std::string &ext);
void canonicalize(std::string &filename, const std::string &base);
int mipmap_levels_for_size(int width, int height);

842
vulkan/slang_preset.cpp Normal file

@@ -0,0 +1,842 @@
#include "slang_preset.hpp"
#include "../external/SPIRV-Cross/spirv.hpp"
#include "slang_helpers.hpp"
#include "slang_preset_ini.hpp"
#include <string>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cctype>
#include <iostream>
#include <fstream>
#include <filesystem>
#include <unordered_map>
#include "../external/SPIRV-Cross/spirv_cross.hpp"
#include "../external/SPIRV-Cross/spirv_glsl.hpp"
#include "slang_shader.hpp"
#include "../../conffile.h"
using std::string;
using std::to_string;
SlangPreset::SlangPreset()
{
}
SlangPreset::~SlangPreset()
{
}
#if 1
bool SlangPreset::load_preset_file(string filename)
{
if (!ends_with(filename, ".slangp"))
return false;
IniFile conf;
if (!conf.load_file(filename))
return false;
int num_passes = conf.get_int("shaders", 0);
if (num_passes <= 0)
return false;
passes.resize(num_passes);
// ConfFile.cpp searches global keys using a "::" prefix. .slang shaders
// indicate data specific to shader pass by appending the pass number.
int index;
auto key = [&](string s) -> string { return s + to_string(index); };
auto iGetBool = [&](string s, bool def = false) -> bool {
return conf.get_bool(key(s), def);
};
auto iGetString = [&](string s, string def = "") -> string {
return conf.get_string(key(s), def);
};
auto iGetFloat = [&](string s, float def = 1.0f) -> float {
return conf.get_float(key(s), def);
};
auto iGetInt = [&](string s, int def = 0) -> int {
return conf.get_int(key(s), def);
};
for (index = 0; index < num_passes; index++)
{
auto &shader = passes[index];
shader.filename = iGetString("shader", "");
canonicalize(shader.filename, conf.get_source(key("shader")));
shader.alias = iGetString("alias", "");
shader.filter_linear = iGetBool("filter_linear");
shader.mipmap_input = iGetBool("mipmap_input");
shader.float_framebuffer = iGetBool("float_framebuffer");
shader.srgb_framebuffer = iGetBool("srgb_framebuffer");
shader.frame_count_mod = iGetInt("frame_count_mod", 0);
shader.wrap_mode = iGetString("wrap_mode");
// Is this correct? It gives priority to _x and _y scale types.
string scale_type = iGetString("scale_type", "undefined");
shader.scale_type_x = iGetString("scale_type_x", scale_type);
shader.scale_type_y = iGetString("scale_type_y", scale_type);
shader.scale_x = iGetFloat("scale_x", 1.0f);
shader.scale_y = iGetFloat("scale_y", 1.0f);
if (conf.exists(key("scale")))
{
float scale = iGetFloat("scale");
shader.scale_x = scale;
shader.scale_y = scale;
}
}
string texture_string = conf.get_string("textures", "");
if (!texture_string.empty())
{
auto texture_list = split_string(texture_string, ';');
for (auto &id : texture_list)
{
Texture texture;
texture.id = trim(id);
textures.push_back(texture);
}
for (auto &t : textures)
{
t.wrap_mode = conf.get_string(t.id + "_wrap_mode", "");
t.mipmap = conf.get_bool(t.id + "_mipmap", false);
t.linear = conf.get_bool(t.id + "_linear", false);
t.filename = conf.get_string(t.id, "");
canonicalize(t.filename, conf.get_source(t.id));
}
}
for (auto &shader : passes)
{
if (!shader.load_file())
return false;
}
for (auto &texture : textures)
canonicalize(texture.filename, conf.textures_filename);
gather_parameters();
for (auto &p : parameters)
{
auto value_str = conf.get_string(p.id, "");
if (!value_str.empty())
{
p.val = atof(value_str.c_str());
if (p.val < p.min)
p.val = p.min;
else if (p.val > p.max)
p.val = p.max;
}
}
return true;
}
#else
bool SlangPreset::load_preset_file(string filename)
{
if (!ends_with(filename, ".slangp"))
return false;
ConfigFile conf;
if (!conf.LoadFile(filename.c_str()))
return false;
int num_passes = conf.GetInt("::shaders", 0);
if (num_passes <= 0)
return false;
passes.resize(num_passes);
// ConfFile.cpp searches global keys using a "::" prefix. .slang shaders
// indicate data specific to shader pass by appending the pass number.
int index;
auto key = [&](string s) -> string { return "::" + s + to_string(index); };
auto iGetBool = [&](string s, bool def = false) -> bool {
return conf.GetBool(key(s).c_str(), def);
};
auto iGetString = [&](string s, string def = "") -> string {
return conf.GetString(key(s).c_str(), def);
};
auto iGetFloat = [&](string s, float def = 1.0f) -> float {
return std::stof(conf.GetString(key(s).c_str(), to_string(def).c_str()));
};
auto iGetInt = [&](string s, int def = 0) -> int {
return conf.GetInt(key(s).c_str(), def);
};
for (index = 0; index < num_passes; index++)
{
auto &shader = passes[index];
shader.filename = iGetString("shader", "");
shader.alias = iGetString("alias", "");
shader.filter_linear = iGetBool("filter_linear");
shader.mipmap_input = iGetBool("mipmap_input");
shader.float_framebuffer = iGetBool("float_framebuffer");
shader.srgb_framebuffer = iGetBool("srgb_framebuffer");
shader.frame_count_mod = iGetInt("frame_count_mod", 0);
shader.wrap_mode = iGetString("wrap_mode");
// Is this correct? It gives priority to _x and _y scale types.
string scale_type = iGetString("scale_type", "undefined");
shader.scale_type_x = iGetString("scale_type_x", scale_type);
shader.scale_type_y = iGetString("scale_type_y", scale_type);
shader.scale_x = iGetFloat("scale_x", 1.0f);
shader.scale_y = iGetFloat("scale_y", 1.0f);
if (conf.Exists(key("scale").c_str()))
{
float scale = iGetFloat("scale");
shader.scale_x = scale;
shader.scale_y = scale;
}
}
string texture_string = conf.GetString("::textures", "");
if (!texture_string.empty())
{
auto texture_list = split_string(texture_string, ';');
for (auto &id : texture_list)
{
Texture texture;
texture.id = trim(id);
textures.push_back(texture);
}
for (auto &t : textures)
{
t.wrap_mode = conf.GetString(("::" + t.id + "_wrap_mode").c_str(), "");
t.mipmap = conf.GetBool(("::" + t.id + "_mipmap").c_str(), "");
t.linear = conf.GetBool(("::" + t.id + "_linear").c_str(), "");
t.filename = conf.GetString(("::" + t.id).c_str(), "");
}
}
for (auto &shader : passes)
{
canonicalize(shader.filename, filename);
if (!shader.load_file())
return false;
}
for (auto &texture : textures)
canonicalize(texture.filename, filename);
gather_parameters();
for (auto &p : parameters)
{
auto value_str = conf.GetString(("::" + p.id).c_str());
if (value_str)
{
p.val = atof(value_str);
if (p.val < p.min)
p.val = p.min;
else if (p.val > p.max)
p.val = p.max;
}
}
return true;
}
#endif
/*
Aggregates the parameters from individual stages and separate shader files,
resolving duplicates.
*/
void SlangPreset::gather_parameters()
{
std::unordered_map<std::string, SlangShader::Parameter> map;
for (auto &s : passes)
{
for (auto &p : s.parameters)
{
map.insert({ p.id, p });
}
}
parameters.clear();
for (auto &p : map)
parameters.push_back(p.second);
}
/*
Print to stdout the entire shader preset chain and parameters, minus source
and SPIRV output
*/
void SlangPreset::print()
{
printf("Number of Shaders: %zu\n", passes.size());
for (size_t i = 0; i < passes.size(); i++)
{
auto &s = passes[i];
printf(" Shader \n");
printf(" filename: %s\n", s.filename.c_str());
printf(" alias: %s\n", s.alias.c_str());
printf(" filter_linear: %d\n", s.filter_linear);
printf(" mipmap_input: %d\n", s.mipmap_input);
printf(" float_framebuffer: %d\n", s.float_framebuffer);
printf(" srgb_framebuffer: %d\n", s.srgb_framebuffer);
printf(" frame_count_mod: %d\n", s.frame_count_mod);
printf(" wrap_mode: %s\n", s.wrap_mode.c_str());
printf(" scale_type_x: %s\n", s.scale_type_x.c_str());
printf(" scale_type_y: %s\n", s.scale_type_y.c_str());
printf(" scale_x: %f\n", s.scale_x);
printf(" scale_y: %f\n", s.scale_y);
printf(" pragma lines: ");
for (auto &p : s.pragma_stage_lines)
printf("%zu ", p);
printf("\n");
printf(" Number of parameters: %zu\n", s.parameters.size());
for (auto &p : s.parameters)
{
printf(" %s \"%s\" min: %f max: %f val: %f step: %f digits: %d\n",
p.id.c_str(), p.name.c_str(), p.min, p.max, p.val, p.step, p.significant_digits);
}
printf(" Uniforms: %zu\n", s.uniforms.size());
printf(" UBO size: %zu, binding: %d\n", s.ubo_size, s.ubo_binding);
printf(" Push Constant block size: %zu\n", s.push_constant_block_size);
for (auto &u : s.uniforms)
{
const char *strings[] = {
"Output Size",
"Previous Frame Size",
"Pass Size",
"Pass Feedback Size",
"Lut Size",
"MVP Matrix",
"Frame Count",
"Parameter"
};
const char *block = u.block == SlangShader::Uniform::UBO ? "UBO" : "Push Constants";
const char *type = strings[u.type];
printf(" ");
switch (u.type)
{
case SlangShader::Uniform::Type::ViewportSize:
printf("%s at offset %zu in %s\n", type, u.offset, block);
break;
case SlangShader::Uniform::Type::PreviousFrameSize:
case SlangShader::Uniform::Type::PassSize:
case SlangShader::Uniform::Type::PassFeedbackSize:
case SlangShader::Uniform::Type::LutSize:
printf("%s %d at offset %zu in %s\n", type, u.specifier, u.offset, block);
break;
case SlangShader::Uniform::Type::MVP:
case SlangShader::Uniform::Type::FrameCount:
printf("%s at offset %zu in %s\n", type, u.offset, block);
break;
case SlangShader::Uniform::Type::Parameter:
printf("%s #%d named \"%s\" at offset %zu in %s\n", type, u.specifier, parameters[u.specifier].id.c_str(), u.offset, block);
break;
default:
break;
}
}
printf(" Samplers: %zu\n", s.samplers.size());
for (auto &sampler : s.samplers)
{
const char *strings[] =
{
"Previous Frame",
"Pass",
"Pass Feedback",
"Lut",
};
const char *type = strings[sampler.type];
printf(" ");
switch (sampler.type)
{
case SlangShader::Sampler::Type::PreviousFrame:
case SlangShader::Sampler::Type::Pass:
case SlangShader::Sampler::Type::PassFeedback:
printf("%s %d at binding %d\n", type, sampler.specifier, sampler.binding);
break;
case SlangShader::Sampler::Type::Lut:
printf("%s %d \"%s\" at binding %d\n", type, sampler.specifier, textures[sampler.specifier].id.c_str(), sampler.binding);
break;
default:
break;
}
}
}
printf("Num textures: %zu\n", textures.size());
for (size_t i = 0; i < textures.size(); i++)
{
auto &t = textures[i];
printf(" Texture %zu\n", i);
printf(" id: %s\n", t.id.c_str());
printf(" filename: %s\n", t.filename.c_str());
printf(" wrap_mode: %s\n", t.wrap_mode.c_str());
printf(" mipmap: %d\n", t.mipmap);
printf(" linear: %d\n", t.linear);
}
printf("Parameters: %zu count\n", parameters.size());
for (size_t i = 0; i < parameters.size(); i++)
{
auto &p = parameters[i];
printf(" Parameter: %zu\n", i);
printf(" id: %s\n", p.id.c_str());
printf(" name: %s\n", p.name.c_str());
printf(" min: %f\n", p.min);
printf(" max: %f\n", p.max);
printf(" step: %f\n", p.step);
printf(" sigdigits: %d\n", p.significant_digits);
printf(" value: %f\n", p.val);
}
printf("Oldest previous frame used: %d\n", oldest_previous_frame);
}
bool SlangPreset::match_sampler_semantic(const string &name, int pass, SlangShader::Sampler::Type &type, int &specifier)
{
auto match_with_specifier = [&name, &specifier](string prefix) -> bool {
if (name.compare(0, prefix.length(), prefix) != 0)
return false;
if (name.length() <= prefix.length())
return false;
for (auto iter = name.begin() + prefix.length(); iter < name.end(); iter++)
{
if (!std::isdigit(*iter))
return false;
}
specifier = std::stoi(name.substr(prefix.length()));
return true;
};
if (name == "Original")
{
type = SlangShader::Sampler::Type::Pass;
specifier = -1;
return true;
}
else if (name == "Source")
{
type = SlangShader::Sampler::Type::Pass;
specifier = pass - 1;
return true;
}
else if (match_with_specifier("OriginalHistory"))
{
type = SlangShader::Sampler::Type::PreviousFrame;
return true;
}
else if (match_with_specifier("PassOutput"))
{
type = SlangShader::Sampler::Type::Pass;
return true;
}
else if (match_with_specifier("PassFeedback"))
{
type = SlangShader::Sampler::Type::PassFeedback;
return true;
}
else if (match_with_specifier("User"))
{
type = SlangShader::Sampler::Type::Lut;
return true;
}
else
{
for (size_t i = 0; i < passes.size(); i++)
{
if (passes[i].alias == name)
{
type = SlangShader::Sampler::Type::Pass;
specifier = i;
return true;
}
else if (passes[i].alias + "Feedback" == name)
{
type = SlangShader::Sampler::Type::PassFeedback;
specifier = i;
return true;
}
}
}
for (size_t i = 0; i < textures.size(); i++)
{
if (name == textures[i].id)
{
type = SlangShader::Sampler::Type::Lut;
specifier = i;
return true;
}
}
return false;
}
bool SlangPreset::match_buffer_semantic(const string &name, int pass, SlangShader::Uniform::Type &type, int &specifier)
{
if (name == "MVP")
{
type = SlangShader::Uniform::Type::MVP;
return true;
}
if (name == "FrameCount")
{
type = SlangShader::Uniform::Type::FrameCount;
return true;
}
if (name == "FinalViewportSize")
{
type = SlangShader::Uniform::Type::ViewportSize;
return true;
}
if (name == "FrameDirection")
{
type = SlangShader::Uniform::Type::FrameDirection;
return true;
}
if (name.find("Size") != string::npos)
{
auto match = [&name, &specifier](string prefix) -> bool {
if (name.compare(0, prefix.length(), prefix) != 0)
return false;
if (name.compare(prefix.length(), 4, "Size") != 0)
return false;
if (prefix.length() + 4 < name.length())
specifier = std::stoi(name.substr(prefix.length() + 4));
return true;
};
if (match("Original"))
{
type = SlangShader::Uniform::Type::PassSize;
specifier = -1;
return true;
}
else if (match("Source"))
{
type = SlangShader::Uniform::Type::PassSize;
specifier = pass - 1;
return true;
}
else if (match("Output"))
{
type = SlangShader::Uniform::Type::PassSize;
specifier = pass;
return true;
}
else if (match("OriginalHistory"))
{
type = SlangShader::Uniform::Type::PreviousFrameSize;
return true;
}
else if (match("PassOutput"))
{
type = SlangShader::Uniform::Type::PassSize;
return true;
}
else if (match("PassFeedback"))
{
type = SlangShader::Uniform::Type::PassFeedbackSize;
return true;
}
else if (match("User"))
{
type = SlangShader::Uniform::Type::LutSize;
return true;
}
for (size_t i = 0; i < passes.size(); i++)
{
if (match(passes[i].alias))
{
type = SlangShader::Uniform::Type::PassSize;
specifier = i;
return true;
}
}
for (size_t i = 0; i < textures.size(); i++)
{
if (match(textures[i].id))
{
type = SlangShader::Uniform::Type::LutSize;
return true;
}
}
}
for (size_t i = 0; i < parameters.size(); i++)
{
if (name == parameters[i].id)
{
type = SlangShader::Uniform::Type::Parameter;
specifier = i;
return true;
}
}
return false;
}
/*
Introspect an individual shader pass, collecting external resource info
in order to build uniform blocks.
*/
bool SlangPreset::introspect_shader(SlangShader &shader, int pass, SlangShader::Stage stage)
{
spirv_cross::CompilerGLSL cross(stage == SlangShader::Stage::Vertex ? shader.vertex_shader_spirv : shader.fragment_shader_spirv);
auto res = cross.get_shader_resources();
if (res.push_constant_buffers.size() > 1)
{
printf("%s: Too many push constant buffers.\n", shader.filename.c_str());
return false;
}
else if (res.uniform_buffers.size() > 1)
{
printf("%s: Too many uniform buffers.\n", shader.filename.c_str());
return false;
}
auto exists = [&shader](const SlangShader::Uniform &uniform) -> bool {
for (const auto &u : shader.uniforms)
{
if (u.block == uniform.block &&
u.offset == uniform.offset &&
u.specifier == uniform.specifier &&
u.type == uniform.type)
{
return true;
}
}
return false;
};
if (res.push_constant_buffers.size() == 0)
{
shader.push_constant_block_size = 0;
}
else
{
auto &pcb = res.push_constant_buffers[0];
auto &pcb_type = cross.get_type(pcb.base_type_id);
shader.push_constant_block_size = cross.get_declared_struct_size(pcb_type);
for (size_t i = 0; i < pcb_type.member_types.size(); i++)
{
auto name = cross.get_member_name(pcb.base_type_id, i);
auto offset = cross.get_member_decoration(pcb.base_type_id, i, spv::DecorationOffset);
SlangShader::Uniform::Type semantic_type;
int specifier;
if (match_buffer_semantic(name, pass, semantic_type, specifier))
{
SlangShader::Uniform uniform{ SlangShader::Uniform::Block::PushConstant,
offset,
semantic_type,
specifier };
if (!exists(uniform))
shader.uniforms.push_back(uniform);
}
else
{
printf("%s: Failed to match push constant semantic: \"%s\"\n", shader.filename.c_str(), name.c_str());
}
}
}
if (res.uniform_buffers.size() == 0)
{
shader.ubo_size = 0;
}
else
{
auto &ubo = res.uniform_buffers[0];
auto &ubo_type = cross.get_type(ubo.base_type_id);
shader.ubo_size = cross.get_declared_struct_size(ubo_type);
shader.ubo_binding = cross.get_decoration(ubo.base_type_id, spv::DecorationBinding);
for (size_t i = 0; i < ubo_type.member_types.size(); i++)
{
auto name = cross.get_member_name(ubo.base_type_id, i);
auto offset = cross.get_member_decoration(ubo.base_type_id, i, spv::DecorationOffset);
SlangShader::Uniform::Type semantic_type;
int specifier;
if (match_buffer_semantic(name, pass, semantic_type, specifier))
{
SlangShader::Uniform uniform{ SlangShader::Uniform::Block::UBO,
offset,
semantic_type,
specifier };
if (!exists(uniform))
shader.uniforms.push_back(uniform);
}
else
{
printf("%s: Failed to match uniform buffer semantic: \"%s\"\n", shader.filename.c_str(), name.c_str());
}
}
}
if (res.sampled_images.size() == 0 && stage == SlangShader::Stage::Fragment)
{
printf("No sampled images found in fragment shader.\n");
return false;
}
if (res.sampled_images.size() > 0 && stage == SlangShader::Stage::Vertex)
{
printf("Sampled image found in vertex shader.\n");
return false;
}
if (stage == SlangShader::Stage::Fragment)
{
for (auto &image : res.sampled_images)
{
SlangShader::Sampler::Type semantic_type;
int specifier;
if (match_sampler_semantic(image.name, pass, semantic_type, specifier))
{
int binding = cross.get_decoration(image.id, spv::DecorationBinding);
shader.samplers.push_back({ semantic_type, specifier, binding });
}
else
{
printf("%s: Failed to match sampler semantic: \"%s\"\n", shader.filename.c_str(), image.name.c_str());
return false;
}
}
}
return true;
}
/*
Introspect all of preset's shaders.
*/
bool SlangPreset::introspect()
{
for (size_t i = 0; i < passes.size(); i++)
{
if (!introspect_shader(passes[i], i, SlangShader::Stage::Vertex))
return false;
if (!introspect_shader(passes[i], i, SlangShader::Stage::Fragment))
return false;
}
oldest_previous_frame = 0;
uses_feedback = false;
last_pass_uses_feedback = false;
for (auto &p : passes)
{
for (auto &s : p.samplers)
{
if (s.type == SlangShader::Sampler::PreviousFrame && s.specifier > oldest_previous_frame)
oldest_previous_frame = s.specifier;
if (s.type == SlangShader::Sampler::PassFeedback)
{
uses_feedback = true;
if (s.specifier == (int)passes.size() - 1)
last_pass_uses_feedback = true;
}
}
}
return true;
}
bool SlangPreset::save_to_file(std::string filename)
{
std::ofstream out(filename);
if (!out.is_open())
return false;
auto outs = [&](std::string key, std::string value) { out << key << " = \"" << value << "\"\n"; };
auto outb = [&](std::string key, bool value) { outs(key, value ? "true" : "false"); };
auto outa = [&](std::string key, auto value) { outs(key, to_string(value)); };
outa("shaders", passes.size());
for (size_t i = 0; i < passes.size(); i++)
{
auto &pass = passes[i];
auto indexed = [i](std::string str) { return str + to_string(i); };
outs(indexed("shader"), pass.filename);
outb(indexed("filter_linear"), pass.filter_linear);
outs(indexed("wrap_mode"), pass.wrap_mode);
outs(indexed("alias"), pass.alias);
outb(indexed("float_framebuffer"), pass.float_framebuffer);
outb(indexed("srgb_framebuffer"), pass.srgb_framebuffer);
outb(indexed("mipmap_input"), pass.mipmap_input);
outs(indexed("scale_type_x"), pass.scale_type_x);
outs(indexed("scale_type_y"), pass.scale_type_y);
outa(indexed("scale_x"), pass.scale_x);
outa(indexed("scale_y"), pass.scale_y);
outa(indexed("frame_count_mod"), pass.frame_count_mod);
}
if (parameters.size() > 0)
{
std::string parameter_list = "";
for (size_t i = 0; i < parameters.size(); i++)
{
parameter_list += parameters[i].id;
if (i < parameters.size() - 1)
parameter_list += ";";
}
outs("parameters", parameter_list);
}
for (auto &item : parameters)
outa(item.id, item.val);
if (textures.size() > 0)
{
std::string texture_list = "";
for (size_t i = 0; i < textures.size(); i++)
{
texture_list += textures[i].id;
if (i < textures.size() - 1)
texture_list += ";";
}
outs("textures", texture_list);
}
for (auto &item : textures)
{
outs(item.id, item.filename);
outb(item.id + "_linear", item.linear);
outs(item.id + "_wrap_mode", item.wrap_mode);
outb(item.id + "_mipmap", item.mipmap);
}
out.close();
return true;
}

38
vulkan/slang_preset.hpp Normal file

@ -0,0 +1,38 @@
#pragma once
#include "slang_shader.hpp"
#include "vulkan/vulkan_core.h"
#include <string>
#include <vector>
struct SlangPreset
{
SlangPreset();
~SlangPreset();
void print();
bool load_preset_file(std::string filename);
bool introspect();
bool introspect_shader(SlangShader &s, int index, SlangShader::Stage stage);
bool match_buffer_semantic(const std::string &name, int pass, SlangShader::Uniform::Type &type, int &specifier);
bool match_sampler_semantic(const std::string &name, int pass, SlangShader::Sampler::Type &type, int &specifier);
void gather_parameters();
bool save_to_file(std::string filename);
struct Texture
{
std::string id;
std::string filename;
std::string wrap_mode;
bool mipmap;
bool linear;
};
std::vector<SlangShader> passes;
std::vector<Texture> textures;
std::vector<SlangShader::Parameter> parameters;
int oldest_previous_frame;
bool uses_feedback;
bool last_pass_uses_feedback;
};

145
vulkan/slang_preset_ini.cpp Normal file

@ -0,0 +1,145 @@
#include "slang_preset_ini.hpp"
#include "slang_helpers.hpp"
#include <fstream>
#include <cstring>
#include <charconv>
IniFile::IniFile()
{
}
IniFile::~IniFile()
{
}
static std::string trim_comments(std::string str)
{
for (auto &comment : { "//", "#" })
{
auto location = str.rfind(comment);
if (location != std::string::npos)
str = str.substr(0, location);
}
return trim(str);
}
static std::string trim_quotes(std::string str)
{
if (str.length() > 1 && str.front() == '\"' && str.back() == '\"')
return str.substr(1, str.length() - 2);
return str;
}
bool IniFile::load_file(std::string filename)
{
std::ifstream file;
file.open(filename);
if (!file.is_open())
{
printf("Could not open %s\n", filename.c_str());
return false;
}
std::string line;
while (1)
{
if (file.eof())
break;
std::getline(file, line);
line = trim(line);
if (line.find("#reference") == 0)
{
std::string reference_path(trim_comments(line.substr(11)));
reference_path = trim_quotes(reference_path);
canonicalize(reference_path, filename);
printf("Loading file %s\n", reference_path.c_str());
if (!load_file(reference_path))
{
printf("Failed to load %s\n", reference_path.c_str());
return false;
}
}
line = trim_comments(line);
if (line.length() == 0)
continue;
auto equals = line.find('=');
if (equals != std::string::npos)
{
auto left_side = trim_quotes(trim(line.substr(0, equals)));
auto right_side = trim_quotes(trim(line.substr(equals + 1)));
keys.insert_or_assign(left_side, std::make_pair(right_side, filename));
}
}
return true;
}
std::string IniFile::get_string(std::string key, std::string default_value = "")
{
auto it = keys.find(key);
if (it == keys.end())
return default_value;
return it->second.first;
}
int IniFile::get_int(std::string key, int default_value = 0)
{
auto it = keys.find(key);
if (it == keys.end())
return default_value;
return std::stoi(it->second.first);
}
float IniFile::get_float(std::string key, float default_value = 0.0f)
{
auto it = keys.find(key);
if (it == keys.end())
return default_value;
return std::stof(it->second.first);
}
std::string IniFile::get_source(std::string key)
{
auto it = keys.find(key);
if (it == keys.end())
return "";
return it->second.second;
}
bool IniFile::get_bool(std::string key, bool default_value = false)
{
auto it = keys.find(key);
if (it == keys.end())
return default_value;
std::string lower = it->second.first;
for (auto &c : lower)
c = tolower(c);
const char *true_strings[] = { "true", "1", "yes", "on"};
for (auto &s : true_strings)
if (lower == s)
return true;
return false;
}
bool IniFile::exists(std::string key)
{
auto it = keys.find(key);
if (it == keys.end())
return false;
return true;
}


@ -0,0 +1,17 @@
#pragma once
#include <unordered_map>
#include <string>
struct IniFile
{
IniFile();
~IniFile();
bool load_file(std::string filename);
std::string get_string(std::string key, std::string default_string);
int get_int(std::string key, int default_int);
float get_float(std::string key, float default_float);
bool get_bool(std::string key, bool default_bool);
std::string get_source(std::string key);
bool exists(std::string key);
std::unordered_map<std::string, std::pair<std::string, std::string>> keys;
};


@ -0,0 +1,25 @@
#include "slang_preset.hpp"
int main(int argc, char **argv)
{
SlangPreset preset;
if (argc != 2)
{
printf("Usage: %s <shader preset file>\n", argv[0]);
return -1;
}
bool success = preset.load_preset_file(argv[1]);
if (!success)
{
printf("Failed to load %s\n", argv[1]);
return -1;
}
preset.introspect();
preset.print();
return 0;
}

274
vulkan/slang_shader.cpp Normal file

@ -0,0 +1,274 @@
#include "slang_shader.hpp"
#include "slang_helpers.hpp"
#include <ostream>
#include <string>
#include <string_view>
#include <sstream>
#include <vector>
#include <fstream>
#include "../external/glslang/glslang/Public/ShaderLang.h"
#include "../external/glslang/SPIRV/GlslangToSpv.h"
#include "../external/glslang/StandAlone/ResourceLimits.h"
using std::string;
using std::vector;
SlangShader::SlangShader()
{
ubo_size = 0;
}
SlangShader::~SlangShader()
{
}
/*
Recursively load a shader file and any included files into memory,
applying #include and #pragma directives. Strips all directives
except #pragma stage.
*/
bool SlangShader::preprocess_shader_file(string filename, vector<string> &lines)
{
std::ifstream stream(filename);
if (stream.fail())
return false;
string line;
while (std::getline(stream, line, '\n'))
{
std::string_view sv(line);
trim(sv);
if (sv.empty())
continue;
else if (sv.compare(0, 8, "#include") == 0)
{
sv.remove_prefix(8);
trim(sv);
if (sv.length() > 1 && sv[0] == '\"' && sv[sv.length() - 1] == '\"')
{
sv.remove_prefix(1);
sv.remove_suffix(1);
string include_file(sv);
canonicalize(include_file, filename);
preprocess_shader_file(include_file, lines);
}
else
{
printf("Syntax error: #include.\n");
return false;
}
}
else if (sv.compare(0, 17, "#pragma parameter") == 0)
{
Parameter p{};
sv.remove_prefix(17);
auto tokens = split_string_quotes(string{ sv });
if (tokens.size() < 5)
{
printf("Syntax error: #pragma parameter\n");
printf("%s\n", string{ sv }.c_str());
return false;
}
p.id = tokens[0];
p.name = tokens[1];
p.val = std::stof(tokens[2]);
p.min = std::stof(tokens[3]);
p.max = std::stof(tokens[4]);
if (tokens.size() >= 6)
p.step = std::stof(tokens[5]);
p.significant_digits = 0;
for (size_t i = 2; i < tokens.size(); i++)
{
int significant_digits = get_significant_digits(tokens[i]);
if (significant_digits > p.significant_digits)
p.significant_digits = significant_digits;
}
parameters.push_back(p);
continue;
}
else if (sv.compare(0, 12, "#pragma name") == 0)
{
alias = sv.substr(13);
}
else if (sv.compare(0, 14, "#pragma format") == 0)
{
format = sv.substr(15);
}
else
{
if (sv.compare(0, 13, "#pragma stage") == 0)
pragma_stage_lines.push_back(lines.size());
lines.push_back(line);
}
}
return true;
}
/*
Use the #pragma stage lines to divide the file into separate vertex and
fragment shaders. Must have called preprocess beforehand.
*/
void SlangShader::divide_into_stages(const std::vector<std::string> &lines)
{
enum
{
vertex,
fragment,
both
} stage;
stage = both;
std::ostringstream vertex_shader_stream;
std::ostringstream fragment_shader_stream;
auto p = pragma_stage_lines.begin();
for (size_t i = 0; i < lines.size(); i++)
{
if (p != pragma_stage_lines.end() && i == *p)
{
if (lines[i].find("vertex") != string::npos)
stage = vertex;
else if (lines[i].find("fragment") != string::npos)
stage = fragment;
p++;
}
else
{
if (stage == vertex || stage == both)
vertex_shader_stream << lines[i] << '\n';
if (stage == fragment || stage == both)
fragment_shader_stream << lines[i] << '\n';
}
}
vertex_shader_string = vertex_shader_stream.str();
fragment_shader_string = fragment_shader_stream.str();
}
/*
Load a shader file into memory, preprocess, divide and compile it to
SPIRV bytecode. Returns true on success.
*/
bool SlangShader::load_file(string new_filename)
{
if (!new_filename.empty())
filename = new_filename;
pragma_stage_lines.clear();
vector<string> lines;
if (!preprocess_shader_file(filename, lines))
{
printf("Failed to load shader file: %s\n", filename.c_str());
return false;
}
divide_into_stages(lines);
if (!generate_spirv())
return false;
return true;
}
static void Initializeglslang()
{
static bool ProcessInitialized = false;
if (!ProcessInitialized)
{
glslang::InitializeProcess();
ProcessInitialized = true;
}
}
std::vector<uint32_t> SlangShader::generate_spirv(std::string shader_string, std::string stage)
{
Initializeglslang();
const EShMessages messages = (EShMessages)(EShMsgDefault | EShMsgVulkanRules | EShMsgSpvRules);
string debug;
auto forbid_includer = glslang::TShader::ForbidIncluder();
auto language = stage == "vertex" ? EShLangVertex : stage == "fragment" ? EShLangFragment : EShLangCompute;
glslang::TShader shaderTShader(language);
auto compile = [&debug, &forbid_includer](glslang::TShader &shader, string &shader_string, std::vector<uint32_t> &spirv) -> bool {
const char *source = shader_string.c_str();
shader.setStrings(&source, 1);
if (!shader.preprocess(&glslang::DefaultTBuiltInResource, 450, ENoProfile, false, false, messages, &debug, forbid_includer))
return false;
if (!shader.parse(&glslang::DefaultTBuiltInResource, 450, false, messages))
return false;
glslang::TProgram program;
program.addShader(&shader);
if (!program.link(messages))
return false;
glslang::GlslangToSpv(*program.getIntermediate(shader.getStage()), spirv);
return true;
};
std::vector<uint32_t> spirv;
if (!compile(shaderTShader, shader_string, spirv))
{
printf("%s\n%s\n%s\n", debug.c_str(), shaderTShader.getInfoLog(), shaderTShader.getInfoDebugLog());
}
return spirv;
}
/*
Generate SPIRV from separate preprocessed fragment and vertex shaders.
Must have called divide_into_stages beforehand. Returns true on success.
*/
bool SlangShader::generate_spirv()
{
Initializeglslang();
const EShMessages messages = (EShMessages)(EShMsgDefault | EShMsgVulkanRules | EShMsgSpvRules | EShMsgDebugInfo | EShMsgAST | EShMsgEnhanced);
auto forbid_includer = glslang::TShader::ForbidIncluder();
glslang::TShader vertexTShader(EShLangVertex);
glslang::TShader fragmentTShader(EShLangFragment);
auto compile = [&forbid_includer](glslang::TShader &shader, string &shader_string, std::vector<uint32_t> &spirv) -> bool {
const char *source = shader_string.c_str();
shader.setStrings(&source, 1);
if (!shader.parse(&glslang::DefaultTBuiltInResource, 450, false, messages, forbid_includer))
return false;
glslang::TProgram program;
program.addShader(&shader);
if (!program.link(messages))
return false;
glslang::GlslangToSpv(*program.getIntermediate(shader.getStage()), spirv);
return true;
};
if (!compile(vertexTShader, vertex_shader_string, vertex_shader_spirv))
{
printf("%s\n%s\n", vertexTShader.getInfoLog(), vertexTShader.getInfoDebugLog());
return false;
}
if (!compile(fragmentTShader, fragment_shader_string, fragment_shader_spirv))
{
printf("%s\n%s\n", fragmentTShader.getInfoLog(), fragmentTShader.getInfoDebugLog());
return false;
}
return true;
}

103
vulkan/slang_shader.hpp Normal file

@ -0,0 +1,103 @@
#pragma once
#include <string>
#include <vector>
struct SlangShader
{
struct Parameter
{
std::string name;
std::string id;
float min;
float max;
float val;
float step;
int significant_digits;
};
struct Uniform
{
enum Block
{
UBO,
PushConstant,
};
enum Type
{
ViewportSize,
PreviousFrameSize,
PassSize,
PassFeedbackSize,
LutSize,
MVP,
FrameCount,
Parameter,
FrameDirection
};
Block block;
size_t offset;
Type type;
int specifier;
};
struct Sampler
{
enum Type
{
PreviousFrame,
Pass,
PassFeedback,
Lut
};
Type type;
int specifier;
int binding;
};
enum class Stage
{
Vertex,
Fragment
};
SlangShader();
~SlangShader();
bool preprocess_shader_file(std::string filename, std::vector<std::string> &lines);
void set_base_path(std::string filename);
bool load_file(std::string new_filename = "");
void divide_into_stages(const std::vector<std::string> &lines);
bool generate_spirv();
static std::vector<uint32_t> generate_spirv(std::string shader_string, std::string stage);
std::string filename;
std::string alias;
bool filter_linear;
bool mipmap_input;
bool float_framebuffer;
bool srgb_framebuffer;
int frame_count_mod;
std::string wrap_mode;
std::string scale_type_x;
std::string scale_type_y;
float scale_x;
float scale_y;
std::string format;
std::vector<Parameter> parameters;
std::vector<size_t> pragma_stage_lines;
std::string vertex_shader_string;
std::string fragment_shader_string;
std::vector<uint32_t> vertex_shader_spirv;
std::vector<uint32_t> fragment_shader_spirv;
size_t push_constant_block_size;
size_t ubo_size;
int ubo_binding;
std::vector<Uniform> uniforms;
std::vector<Sampler> samplers;
};

80
vulkan/test.cpp Normal file

@ -0,0 +1,80 @@
#include <gtkmm.h>
#include <gdk/gdkx.h>
#include "vk2d.hpp"
int main(int argc, char *argv[])
{
XInitThreads();
auto application = Gtk::Application::create(argc, argv, "org.bearoso.vulkantest");
Gtk::Window window;
Gtk::Button button;
Gtk::DrawingArea drawingarea;
Gtk::VBox vbox;
window.set_title("Vulkan Test");
window.set_events(Gdk::EventMask::ALL_EVENTS_MASK);
button.set_label("Close");
vbox.pack_start(drawingarea, true, true, 0);
vbox.pack_start(button, false, false, 0);
vbox.set_spacing(5);
button.set_hexpand(false);
button.set_halign(Gtk::ALIGN_END);
window.add(vbox);
window.set_border_width(5);
vbox.show_all();
button.signal_clicked().connect([&] {
window.close();
});
window.resize(640, 480);
window.show_all();
Window xid = gdk_x11_window_get_xid(drawingarea.get_window()->gobj());
Display *dpy = gdk_x11_display_get_xdisplay(drawingarea.get_display()->gobj());
vk2d vk2d;
vk2d.init_xlib_instance();
vk2d.attach(dpy, xid);
vk2d.init_device();
drawingarea.signal_configure_event().connect([&](GdkEventConfigure *event) {
vk2d.recreate_swapchain();
return false;
});
window.signal_key_press_event().connect([&](GdkEventKey *key) -> bool {
printf ("Key press %d\n", key->keyval);
return false;
}, false);
window.signal_key_release_event().connect([&](GdkEventKey *key) -> bool {
printf ("Key release %d\n", key->keyval);
return false;
}, false);
drawingarea.set_app_paintable(true);
drawingarea.signal_draw().connect([&](const Cairo::RefPtr<Cairo::Context> &context) -> bool {
return true;
});
auto id = Glib::signal_idle().connect([&]{
vk2d.draw();
vk2d.wait_idle();
return true;
});
window.signal_delete_event().connect([&](GdkEventAny *event) {
id.disconnect();
return false;
});
application->run(window);
return 0;
}

641
vulkan/vk2d.cpp Normal file

@ -0,0 +1,641 @@
#include "vk2d.hpp"
#include <glslang/Include/BaseTypes.h>
#include <glslang/Public/ShaderLang.h>
#include <glslang/SPIRV/GlslangToSpv.h>
#include <glslang/Include/ResourceLimits.h>
#include <vulkan/vulkan_core.h>
static const char *vertex_shader = R"(
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) out vec3 fragColor;
vec2 positions[3] = vec2[](
vec2(0.0, -0.5),
vec2(0.5, 0.5),
vec2(-0.5, 0.5)
);
vec3 colors[3] = vec3[](
vec3(1.0, 0.0, 0.0),
vec3(0.0, 1.0, 0.0),
vec3(0.0, 0.0, 1.0)
);
void main() {
gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
fragColor = colors[gl_VertexIndex];
}
)";
static const char *fragment_shader = R"(
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 fragColor;
layout(location = 0) out vec4 outColor;
void main() {
outColor = vec4(fragColor, 1.0);
}
)";
bool vk2d::dispatcher_initialized = false;
vk2d::vk2d()
{
instance = nullptr;
surface = nullptr;
if (!dispatcher_initialized)
{
vk::DynamicLoader *dl = new vk::DynamicLoader;
PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr = dl->getProcAddress<PFN_vkGetInstanceProcAddr>("vkGetInstanceProcAddr");
VULKAN_HPP_DEFAULT_DISPATCHER.init(vkGetInstanceProcAddr);
dispatcher_initialized = true;
}
}
vk2d::~vk2d()
{
deinit();
}
bool vk2d::init_device()
{
if (!instance || !surface)
return false;
choose_physical_device();
create_device();
create_sync_objects();
create_swapchain();
create_render_pass();
create_pipeline();
create_framebuffers();
create_command_buffers();
return true;
}
#ifdef VK_USE_PLATFORM_XLIB_KHR
bool vk2d::init_xlib_instance()
{
if (instance)
return true;
std::vector<const char *> extensions = { VK_KHR_XLIB_SURFACE_EXTENSION_NAME, VK_KHR_SURFACE_EXTENSION_NAME };
vk::ApplicationInfo ai({}, {}, {}, {}, VK_API_VERSION_1_0);
vk::InstanceCreateInfo ci({}, &ai, {}, extensions);
auto rv = vk::createInstance(ci);
if (rv.result != vk::Result::eSuccess)
return false;
instance = rv.value;
VULKAN_HPP_DEFAULT_DISPATCHER.init(instance);
return true;
}
void vk2d::attach(Display *dpy, Window xid)
{
if (surface)
{
instance.destroySurfaceKHR(surface);
surface = nullptr;
}
vk::XlibSurfaceCreateInfoKHR sci({}, dpy, xid);
auto rv = instance.createXlibSurfaceKHR(sci);
VK_CHECK(rv.result);
surface = rv.value;
}
#endif // VK_USE_PLATFORM_XLIB_KHR
bool vk2d::create_instance()
{
std::vector<const char *> extensions = { VK_KHR_XLIB_SURFACE_EXTENSION_NAME, VK_KHR_SURFACE_EXTENSION_NAME };
vk::ApplicationInfo ai({}, {}, {}, {}, VK_API_VERSION_1_1);
vk::InstanceCreateInfo ci({}, &ai, {}, {}, extensions.size(), extensions.data());
auto rv = vk::createInstance(ci);
VK_CHECK(rv.result);
instance = rv.value;
VULKAN_HPP_DEFAULT_DISPATCHER.init(instance);
return true;
}
void vk2d::deinit()
{
destroy_swapchain();
frame_queue.clear();
if (command_pool)
device.destroyCommandPool(command_pool);
if (device)
device.destroy();
if (surface)
instance.destroySurfaceKHR(surface);
if (instance)
instance.destroy();
}
void vk2d::destroy_swapchain()
{
if (device)
{
VK_CHECK(device.waitIdle());
}
swapchain.framebuffers.clear();
swapchain.views.clear();
device.freeCommandBuffers(command_pool, swapchain.command_buffers);
if (graphics_pipeline)
device.destroyPipeline(graphics_pipeline);
if (render_pass)
device.destroyRenderPass(render_pass);
if (pipeline_layout)
device.destroyPipelineLayout(pipeline_layout);
if (swapchain.obj)
{
device.destroySwapchainKHR(swapchain.obj);
swapchain.obj = nullptr;
}
}
void vk2d::recreate_swapchain()
{
frame_queue_index = 0;
destroy_swapchain();
create_swapchain();
create_render_pass();
create_pipeline();
create_framebuffers();
create_command_buffers();
}
void vk2d::choose_physical_device()
{
if (!surface)
assert(0);
auto devices = instance.enumeratePhysicalDevices();
VK_CHECK(devices.result);
std::vector<vk::PhysicalDevice> candidates;
for (auto &d : devices.value)
{
auto extension_properties = d.enumerateDeviceExtensionProperties();
VK_CHECK(extension_properties.result);
bool presentable = false;
for (auto &e : extension_properties.value)
{
std::string name = e.extensionName;
if (name == VK_KHR_SWAPCHAIN_EXTENSION_NAME)
{
presentable = true;
}
}
if (!presentable)
continue;
auto queue_families = d.getQueueFamilyProperties();
for (size_t q = 0; q < queue_families.size(); q++)
{
if (queue_families[q].queueFlags & vk::QueueFlagBits::eGraphics)
{
graphics_queue_index = q;
presentable = true;
break;
}
presentable = false;
}
presentable = presentable && d.getSurfaceSupportKHR(graphics_queue_index, surface).value;
if (presentable)
{
printf("Using %s\n", (char *)d.getProperties().deviceName);
physical_device = d;
return;
}
}
physical_device = nullptr;
graphics_queue_index = -1;
}
void vk2d::wait_idle()
{
if (device)
{
auto result = device.waitIdle();
VK_CHECK(result);
}
}
void vk2d::draw()
{
auto &frame = frame_queue[frame_queue_index];
uint32_t next;
VK_CHECK(device.waitForFences(1, &frame.fence.get(), true, UINT64_MAX));
vk::ResultValue resval = device.acquireNextImageKHR(swapchain.obj, UINT64_MAX, frame.image_ready.get(), nullptr);
if (resval.result != vk::Result::eSuccess)
{
if (resval.result == vk::Result::eErrorOutOfDateKHR)
{
printf("Recreating swapchain\n");
recreate_swapchain();
return;
}
VK_CHECK(resval.result);
exit(1);
}
next = resval.value;
if (swapchain.frame_fence[next] > -1)
{
VK_CHECK(device.waitForFences(1, &frame_queue[swapchain.frame_fence[next]].fence.get(), true, UINT64_MAX));
}
swapchain.frame_fence[next] = frame_queue_index;
VK_CHECK(device.resetFences(1, &frame.fence.get()));
vk::PipelineStageFlags flags = vk::PipelineStageFlagBits::eColorAttachmentOutput;
vk::SubmitInfo submit_info(frame.image_ready.get(),
flags,
swapchain.command_buffers[next],
frame.render_finished.get());
VK_CHECK(queue.submit(submit_info, frame.fence.get()));
vk::PresentInfoKHR present_info(frame.render_finished.get(), swapchain.obj, next, {});
VK_CHECK(queue.presentKHR(present_info));
frame_queue_index = (frame_queue_index + 1) % frame_queue_size;
}
void vk2d::create_device()
{
float queue_priority = 1.0f;
std::vector<const char *> extension_names = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };
vk::DeviceQueueCreateInfo dqci({}, graphics_queue_index, 1, &queue_priority);
vk::DeviceCreateInfo dci;
dci.setPEnabledExtensionNames(extension_names);
std::vector<vk::DeviceQueueCreateInfo> pqci = {dqci};
dci.setQueueCreateInfos(pqci);
device = physical_device.createDevice(dci).value;
queue = device.getQueue(graphics_queue_index, 0);
vk::CommandPoolCreateInfo command_pool_info({}, graphics_queue_index);
command_pool = device.createCommandPool(command_pool_info).value;
}
void vk2d::create_swapchain()
{
if (!device || !surface)
assert(0);
vk::SurfaceCapabilitiesKHR surface_caps = physical_device.getSurfaceCapabilitiesKHR(surface).value;
swapchain.size = surface_caps.minImageCount;
vk::SwapchainCreateInfoKHR sci;
sci
.setSurface(surface)
.setMinImageCount(swapchain.size)
.setPresentMode(vk::PresentModeKHR::eFifo)
.setImageFormat(vk::Format::eB8G8R8A8Unorm)
.setImageExtent(surface_caps.currentExtent)
.setImageColorSpace(vk::ColorSpaceKHR::eSrgbNonlinear)
.setImageArrayLayers(1)
.setImageSharingMode(vk::SharingMode::eExclusive)
.setImageUsage(vk::ImageUsageFlagBits::eColorAttachment)
.setCompositeAlpha(vk::CompositeAlphaFlagBitsKHR::eOpaque)
.setClipped(true);
if (swapchain.obj)
sci.setOldSwapchain(swapchain.obj);
swapchain.obj = device.createSwapchainKHR(sci).value;
swapchain.extents = surface_caps.currentExtent;
swapchain.images = device.getSwapchainImagesKHR(swapchain.obj).value;
swapchain.views.resize(swapchain.size);
for (size_t i = 0; i < swapchain.size; i++)
{
vk::ImageViewCreateInfo image_view_create_info;
image_view_create_info
.setImage(swapchain.images[i])
.setViewType(vk::ImageViewType::e2D)
.setFormat(vk::Format::eB8G8R8A8Unorm)
.setComponents(vk::ComponentMapping())
.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
swapchain.views[i] = device.createImageViewUnique(image_view_create_info).value;
}
swapchain.frame_fence.resize(swapchain.size);
for (auto &f : swapchain.frame_fence)
f = -1;
}
void vk2d::create_sync_objects()
{
frame_queue.resize(frame_queue_size);
for (size_t i = 0; i < frame_queue_size; i++)
{
vk::SemaphoreCreateInfo semaphore_create_info;
frame_queue[i].image_ready = device.createSemaphoreUnique(semaphore_create_info).value;
frame_queue[i].render_finished = device.createSemaphoreUnique(semaphore_create_info).value;
vk::FenceCreateInfo fence_create_info(vk::FenceCreateFlagBits::eSignaled);
frame_queue[i].fence = device.createFenceUnique(fence_create_info).value;
}
frame_queue_index = 0;
}
namespace glslang
{
extern const TBuiltInResource DefaultTBuiltInResource;
}
void vk2d::create_shader_modules()
{
glslang::InitializeProcess();
EShMessages message_flags = (EShMessages)(EShMsgDefault | EShMsgVulkanRules | EShMsgSpvRules);
glslang::TShader vertex(EShLangVertex);
glslang::TShader fragment(EShLangFragment);
vertex.setStrings(&vertex_shader, 1);
fragment.setStrings(&fragment_shader, 1);
vertex.parse(&glslang::DefaultTBuiltInResource, 450, true, message_flags);
fragment.parse(&glslang::DefaultTBuiltInResource, 450, true, message_flags);
glslang::TProgram vertex_program;
glslang::TProgram fragment_program;
vertex_program.addShader(&vertex);
fragment_program.addShader(&fragment);
vertex_program.link(message_flags);
fragment_program.link(message_flags);
auto log = [](const char *msg)
{
if (msg != nullptr && msg[0] != '\0')
puts(msg);
};
log(vertex_program.getInfoLog());
log(vertex_program.getInfoDebugLog());
log(fragment_program.getInfoLog());
log(fragment_program.getInfoDebugLog());
std::vector<uint32_t> vertex_spirv;
std::vector<uint32_t> fragment_spirv;
glslang::GlslangToSpv(*vertex_program.getIntermediate(EShLangVertex), vertex_spirv);
glslang::GlslangToSpv(*fragment_program.getIntermediate(EShLangFragment), fragment_spirv);
vk::ShaderModuleCreateInfo smci;
smci.setCode(vertex_spirv);
vertex_module = device.createShaderModule(smci).value;
smci.setCode(fragment_spirv);
fragment_module = device.createShaderModule(smci).value;
}
void vk2d::create_pipeline()
{
create_shader_modules();
if (!vertex_module || !fragment_module)
assert(0);
vk::PipelineShaderStageCreateInfo vertex_ci;
vertex_ci
.setStage(vk::ShaderStageFlagBits::eVertex)
.setModule(vertex_module)
.setPName("main");
vk::PipelineShaderStageCreateInfo fragment_ci;
fragment_ci
.setStage(vk::ShaderStageFlagBits::eFragment)
.setModule(fragment_module)
.setPName("main");
std::vector<vk::PipelineShaderStageCreateInfo> stages = { vertex_ci, fragment_ci };
vk::PipelineVertexInputStateCreateInfo vertex_input_info;
vertex_input_info
.setVertexBindingDescriptionCount(0)
.setVertexAttributeDescriptionCount(0);
// Add Vertex attributes here
vk::PipelineInputAssemblyStateCreateInfo pipeline_input_assembly_info;
pipeline_input_assembly_info
.setTopology(vk::PrimitiveTopology::eTriangleList)
.setPrimitiveRestartEnable(false);
std::vector<vk::Viewport> viewports(1);
viewports[0]
.setX(0.0f)
.setY(0.0f)
.setWidth(swapchain.extents.width)
.setHeight(swapchain.extents.height)
.setMinDepth(0.0f)
.setMaxDepth(1.0f);
std::vector<vk::Rect2D> scissors(1);
scissors[0].extent = swapchain.extents;
scissors[0].offset = vk::Offset2D(0, 0);
vk::PipelineViewportStateCreateInfo pipeline_viewport_info;
pipeline_viewport_info
.setViewports(viewports)
.setScissors(scissors);
vk::PipelineRasterizationStateCreateInfo rasterizer_info;
rasterizer_info
.setCullMode(vk::CullModeFlagBits::eBack)
.setFrontFace(vk::FrontFace::eClockwise)
.setLineWidth(1.0f)
.setDepthClampEnable(false)
.setRasterizerDiscardEnable(false)
.setPolygonMode(vk::PolygonMode::eFill)
.setDepthBiasEnable(false);
vk::PipelineMultisampleStateCreateInfo multisample_info;
multisample_info
.setSampleShadingEnable(false)
.setRasterizationSamples(vk::SampleCountFlagBits::e1);
vk::PipelineDepthStencilStateCreateInfo depth_stencil_info;
depth_stencil_info.setDepthTestEnable(false);
vk::PipelineColorBlendAttachmentState blend_attachment_info;
blend_attachment_info
.setColorWriteMask(vk::ColorComponentFlagBits::eB |
vk::ColorComponentFlagBits::eG |
vk::ColorComponentFlagBits::eR |
vk::ColorComponentFlagBits::eA)
.setBlendEnable(true)
.setColorBlendOp(vk::BlendOp::eAdd)
.setSrcColorBlendFactor(vk::BlendFactor::eSrcAlpha)
.setDstColorBlendFactor(vk::BlendFactor::eOneMinusSrcAlpha)
.setAlphaBlendOp(vk::BlendOp::eAdd)
.setSrcAlphaBlendFactor(vk::BlendFactor::eOne)
.setDstAlphaBlendFactor(vk::BlendFactor::eZero);
vk::PipelineColorBlendStateCreateInfo blend_state_info;
blend_state_info
.setLogicOpEnable(false)
.setAttachmentCount(1)
.setPAttachments(&blend_attachment_info);
vk::PipelineDynamicStateCreateInfo dynamic_state_info;
dynamic_state_info.setDynamicStateCount(0);
vk::PipelineLayoutCreateInfo pipeline_layout_info;
pipeline_layout_info
.setSetLayoutCount(0)
.setPushConstantRangeCount(0);
pipeline_layout = device.createPipelineLayout(pipeline_layout_info).value;
vk::GraphicsPipelineCreateInfo pipeline_create_info;
pipeline_create_info
.setStageCount(2)
.setStages(stages)
.setPVertexInputState(&vertex_input_info)
.setPInputAssemblyState(&pipeline_input_assembly_info)
.setPViewportState(&pipeline_viewport_info)
.setPRasterizationState(&rasterizer_info)
.setPMultisampleState(&multisample_info)
.setPDepthStencilState(&depth_stencil_info)
.setPColorBlendState(&blend_state_info)
.setPDynamicState(&dynamic_state_info)
.setLayout(pipeline_layout)
.setRenderPass(render_pass)
.setSubpass(0);
vk::ResultValue<vk::Pipeline> result = device.createGraphicsPipeline(nullptr, pipeline_create_info);
graphics_pipeline = result.value;
device.destroyShaderModule(vertex_module);
device.destroyShaderModule(fragment_module);
}
void vk2d::create_render_pass()
{
vk::AttachmentDescription attachment_description(
{},
vk::Format::eB8G8R8A8Unorm,
vk::SampleCountFlagBits::e1,
vk::AttachmentLoadOp::eClear,
vk::AttachmentStoreOp::eStore,
vk::AttachmentLoadOp::eLoad,
vk::AttachmentStoreOp::eStore,
vk::ImageLayout::eUndefined,
vk::ImageLayout::ePresentSrcKHR);
vk::AttachmentReference attachment_reference(0, vk::ImageLayout::eColorAttachmentOptimal);
vk::SubpassDependency subpass_dependency;
subpass_dependency
.setSrcSubpass(VK_SUBPASS_EXTERNAL)
.setDstSubpass(0)
.setSrcStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setSrcAccessMask(vk::AccessFlagBits(0))
.setDstStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setDstAccessMask(vk::AccessFlagBits::eColorAttachmentWrite);
vk::SubpassDescription subpass_description;
subpass_description
.setPipelineBindPoint(vk::PipelineBindPoint::eGraphics)
.setColorAttachments(attachment_reference);
vk::RenderPassCreateInfo render_pass_info(
{},
attachment_description,
subpass_description,
subpass_dependency);
render_pass = device.createRenderPass(render_pass_info).value;
}
void vk2d::create_framebuffers()
{
swapchain.framebuffers.resize(swapchain.images.size());
for (size_t i = 0; i < swapchain.images.size(); i++)
{
vk::FramebufferCreateInfo fci;
std::vector<vk::ImageView> attachments = { swapchain.views[i].get() };
fci
.setAttachments(attachments)
.setLayers(1)
.setRenderPass(render_pass)
.setWidth(swapchain.extents.width)
.setHeight(swapchain.extents.height);
swapchain.framebuffers[i] = device.createFramebufferUnique(fci).value;
}
}
void vk2d::create_command_buffers()
{
vk::CommandBufferAllocateInfo allocate_info;
allocate_info.setCommandBufferCount(static_cast<uint32_t>(swapchain.images.size()));
allocate_info.setCommandPool(command_pool);
swapchain.command_buffers = device.allocateCommandBuffers(allocate_info).value;
for (size_t i = 0; i < swapchain.command_buffers.size(); i++)
{
auto &cb = swapchain.command_buffers[i];
VK_CHECK(cb.begin(vk::CommandBufferBeginInfo{}));
vk::ClearColorValue color;
color.setFloat32({ 0.0f, 1.0f, 1.0f, 0.0f });
vk::ClearValue clear_value(color);
vk::RenderPassBeginInfo rpbi;
rpbi.setRenderPass(render_pass);
rpbi.setFramebuffer(swapchain.framebuffers[i].get());
rpbi.setPClearValues(&clear_value);
rpbi.setClearValueCount(1);
rpbi.renderArea.setExtent(swapchain.extents);
rpbi.renderArea.setOffset({0, 0});
cb.beginRenderPass(rpbi, vk::SubpassContents::eInline);
cb.bindPipeline(vk::PipelineBindPoint::eGraphics, graphics_pipeline);
cb.draw(3, 1, 0, 0);
cb.endRenderPass();
VK_CHECK(cb.end());
}
}

88
vulkan/vk2d.hpp Normal file

@ -0,0 +1,88 @@
#pragma once
#include "vulkan/vulkan.hpp"
#include <fstream>
#ifdef VK_USE_PLATFORM_XLIB_KHR
#include <X11/Xlib.h>
#endif
#ifdef VK_USE_PLATFORM_WAYLAND_KHR
#include <wayland-client.h>
#endif
#define VK_CHECK(result) vk_check_result_function(result, __FILE__, __LINE__)
inline void vk_check_result_function(vk::Result result, const char *file, int line)
{
if (result != vk::Result::eSuccess)
{
printf("%s:%d Vulkan error: %s\n", file, line, vk::to_string(result).c_str());
}
}
class vk2d
{
public:
vk2d();
~vk2d();
#ifdef VK_USE_PLATFORM_XLIB_KHR
bool init_xlib_instance();
void attach(Display *dpy, Window xid);
#endif
bool init_device();
void deinit();
bool create_instance();
void choose_physical_device();
void create_device();
void create_swapchain();
void create_sync_objects();
void create_shader_modules();
void create_pipeline();
void create_render_pass();
void create_framebuffers();
void create_command_buffers();
void recreate_swapchain();
void destroy_swapchain();
void wait_idle();
void draw();
vk::Instance instance;
vk::PhysicalDevice physical_device;
vk::SurfaceKHR surface;
vk::Pipeline graphics_pipeline;
vk::PipelineLayout pipeline_layout;
vk::RenderPass render_pass;
vk::ShaderModule vertex_module;
vk::ShaderModule fragment_module;
vk::Device device;
vk::Queue queue;
vk::CommandPool command_pool;
size_t graphics_queue_index;
struct frame_t {
vk::UniqueSemaphore render_finished;
vk::UniqueSemaphore image_ready;
vk::UniqueFence fence;
};
static const size_t frame_queue_size = 3;
size_t frame_queue_index;
std::vector<frame_t> frame_queue;
struct {
vk::SwapchainKHR obj;
vk::Extent2D extents;
std::vector<vk::Image> images;
std::vector<vk::UniqueImageView> views;
std::vector<vk::UniqueFramebuffer> framebuffers;
std::vector<vk::CommandBuffer> command_buffers;
std::vector<int> frame_fence;
size_t size;
} swapchain;
static bool dispatcher_initialized;
};


@ -0,0 +1,2 @@
#define VMA_IMPLEMENTATION
#include "vk_mem_alloc.hpp"

200
vulkan/vulkan_context.cpp Normal file

@ -0,0 +1,200 @@
#include <exception>
#include <cstring>
#include <tuple>
#include "vulkan_context.hpp"
#include "vk_mem_alloc.hpp"
#include "slang_shader.hpp"
#include "vulkan/vulkan.hpp"
namespace Vulkan
{
Context::Context()
{
// Intentionally leaked: the dynamic loader must outlive every Vulkan object.
auto dl = new vk::DynamicLoader;
auto vkGetInstanceProcAddr =
dl->getProcAddress<PFN_vkGetInstanceProcAddr>("vkGetInstanceProcAddr");
VULKAN_HPP_DEFAULT_DISPATCHER.init(vkGetInstanceProcAddr);
}
Context::~Context()
{
if (!device)
return;
device.waitIdle();
swapchain = nullptr;
command_pool.reset();
descriptor_pool.reset();
allocator.destroy();
surface.reset();
device.waitIdle();
device.destroy();
}
bool Context::init(Display *dpy, Window xid, int preferred_device)
{
if (instance)
return false;
xlib_display = dpy;
xlib_window = xid;
std::vector<const char *> extensions = { VK_KHR_XLIB_SURFACE_EXTENSION_NAME, VK_KHR_SURFACE_EXTENSION_NAME };
vk::ApplicationInfo application_info({}, {}, {}, {}, VK_API_VERSION_1_0);
vk::InstanceCreateInfo instance_create_info({}, &application_info, {}, extensions);
instance = vk::createInstanceUnique(instance_create_info);
VULKAN_HPP_DEFAULT_DISPATCHER.init(instance.get());
surface = instance->createXlibSurfaceKHRUnique({ {}, xlib_display, xlib_window });
init_device(preferred_device);
init_vma();
init_command_pool();
init_descriptor_pool();
create_swapchain();
device.waitIdle();
return true;
}
bool Context::init_descriptor_pool()
{
auto descriptor_pool_size = vk::DescriptorPoolSize{}
.setDescriptorCount(9)
.setType(vk::DescriptorType::eCombinedImageSampler);
auto descriptor_pool_create_info = vk::DescriptorPoolCreateInfo{}
.setPoolSizes(descriptor_pool_size)
.setMaxSets(20)
.setFlags(vk::DescriptorPoolCreateFlagBits::eFreeDescriptorSet);
descriptor_pool = device.createDescriptorPoolUnique(descriptor_pool_create_info);
return true;
}
bool Context::init_command_pool()
{
vk::CommandPoolCreateInfo cpci({}, graphics_queue_family_index);
cpci.setFlags(vk::CommandPoolCreateFlagBits::eResetCommandBuffer);
command_pool = device.createCommandPoolUnique(cpci);
return true;
}
bool Context::init_device(int preferred_device)
{
auto device_list = instance->enumeratePhysicalDevices();
if (device_list.empty())
return false;
auto find_device = [&]() -> vk::PhysicalDevice {
for (auto &d : device_list)
{
auto ep = d.enumerateDeviceExtensionProperties();
auto exists = std::find_if(ep.begin(), ep.end(), [](vk::ExtensionProperties &ext) {
return (std::string(ext.extensionName) == VK_KHR_SWAPCHAIN_EXTENSION_NAME);
});
if (exists != ep.end())
return d;
}
return device_list[0];
};
if (preferred_device > 0)
physical_device = device_list[preferred_device];
else
physical_device = find_device();
physical_device.getProperties(&physical_device_props);
graphics_queue_family_index = UINT32_MAX;
auto queue_props = physical_device.getQueueFamilyProperties();
for (size_t i = 0; i < queue_props.size(); i++)
{
if (queue_props[i].queueFlags & vk::QueueFlagBits::eGraphics)
{
graphics_queue_family_index = static_cast<uint32_t>(i);
break;
}
}
if (graphics_queue_family_index == UINT32_MAX)
return false;
std::vector<const char *> extension_names = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };
std::vector<float> priorities { 1.0f };
vk::DeviceQueueCreateInfo dqci({}, graphics_queue_family_index, priorities);
vk::DeviceCreateInfo dci({}, dqci, {}, extension_names, {});
device = physical_device.createDevice(dci);
queue = device.getQueue(graphics_queue_family_index, 0);
auto surface_formats = physical_device.getSurfaceFormatsKHR(surface.get());
auto format = std::find_if(surface_formats.begin(), surface_formats.end(), [](vk::SurfaceFormatKHR &f) {
return (f.format == vk::Format::eB8G8R8A8Unorm);
});
if (format == surface_formats.end())
return false;
return true;
}
bool Context::init_vma()
{
auto vulkan_functions = vma::VulkanFunctions{}
.setVkGetInstanceProcAddr(VULKAN_HPP_DEFAULT_DISPATCHER.vkGetInstanceProcAddr)
.setVkGetDeviceProcAddr(VULKAN_HPP_DEFAULT_DISPATCHER.vkGetDeviceProcAddr);
auto allocator_create_info = vma::AllocatorCreateInfo{}
.setDevice(device)
.setInstance(instance.get())
.setPhysicalDevice(physical_device)
.setPVulkanFunctions(&vulkan_functions);
allocator = vma::createAllocator(allocator_create_info);
return true;
}
bool Context::create_swapchain()
{
wait_idle();
swapchain = std::make_unique<Swapchain>(device, physical_device, queue, surface.get(), command_pool.get());
return swapchain->create(3);
}
bool Context::recreate_swapchain()
{
return swapchain->recreate();
}
void Context::wait_idle()
{
if (device)
device.waitIdle();
}
vk::CommandBuffer Context::begin_cmd_buffer()
{
vk::CommandBufferAllocateInfo command_buffer_allocate_info(command_pool.get(), vk::CommandBufferLevel::ePrimary, 1);
auto command_buffer = device.allocateCommandBuffers(command_buffer_allocate_info);
one_time_use_cmd = command_buffer[0];
one_time_use_cmd.begin({ vk::CommandBufferUsageFlagBits::eOneTimeSubmit });
return one_time_use_cmd;
}
void Context::end_cmd_buffer()
{
one_time_use_cmd.end();
vk::SubmitInfo submit_info{};
submit_info.setCommandBuffers(one_time_use_cmd);
queue.submit(submit_info);
queue.waitIdle();
device.freeCommandBuffers(command_pool.get(), one_time_use_cmd);
one_time_use_cmd = nullptr;
}
} // namespace Vulkan

54
vulkan/vulkan_context.hpp Normal file

@ -0,0 +1,54 @@
#pragma once
#include "X11/Xlib.h"
#include "vk_mem_alloc.hpp"
#include "vulkan/vulkan.hpp"
#include "vulkan_swapchain.hpp"
#include <memory>
#include <optional>
namespace Vulkan
{
class Context
{
public:
Context();
~Context();
bool init(Display *dpy, Window xid, int preferred_device = 0);
bool create_swapchain();
bool recreate_swapchain();
void wait_idle();
vk::CommandBuffer begin_cmd_buffer();
void end_cmd_buffer();
vma::Allocator allocator;
vk::Device device;
vk::Queue queue;
vk::UniqueCommandPool command_pool;
vk::UniqueDescriptorPool descriptor_pool;
std::unique_ptr<Swapchain> swapchain;
private:
bool init_vma();
bool init_device(int preferred_device = 0);
bool init_command_pool();
bool init_descriptor_pool();
void create_vertex_buffer();
vk::UniquePipeline create_generic_pipeline();
Display *xlib_display;
Window xlib_window;
vk::UniqueInstance instance;
vk::PhysicalDevice physical_device;
vk::PhysicalDeviceProperties physical_device_props;
vk::UniqueSurfaceKHR surface;
uint32_t graphics_queue_family_index;
vk::CommandBuffer one_time_use_cmd;
};
} // namespace Vulkan


@ -0,0 +1,2 @@
#include "vulkan/vulkan.hpp"
VULKAN_HPP_DEFAULT_DISPATCH_LOADER_DYNAMIC_STORAGE


@ -0,0 +1,266 @@
#include "vulkan_pipeline_image.hpp"
#include "slang_helpers.hpp"
namespace Vulkan
{
PipelineImage::PipelineImage()
{
image_width = 0;
image_height = 0;
device = nullptr;
command_pool = nullptr;
allocator = nullptr;
queue = nullptr;
image = nullptr;
current_layout = vk::ImageLayout::eUndefined;
}
PipelineImage::~PipelineImage()
{
destroy();
}
void PipelineImage::init(vk::Device device_, vk::CommandPool command_, vk::Queue queue_, vma::Allocator allocator_)
{
device = device_;
command_pool = command_;
allocator = allocator_;
queue = queue_;
}
void PipelineImage::init(Context *context)
{
device = context->device;
command_pool = context->command_pool.get();
allocator = context->allocator;
queue = context->queue;
}
void PipelineImage::destroy()
{
if (!device || !allocator)
return;
if (image_width != 0 || image_height != 0)
{
framebuffer.reset();
device.destroyImageView(image_view);
device.destroyImageView(mipless_view);
allocator.destroyImage(image, image_allocation);
image_width = image_height = 0;
image_view = nullptr;
image = nullptr;
image_allocation = nullptr;
current_layout = vk::ImageLayout::eUndefined;
}
}
void PipelineImage::generate_mipmaps(vk::CommandBuffer cmd)
{
if (!mipmap)
return;
auto srr = [](unsigned int i) { return vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, i, 1, 0, 1); };
auto srl = [](unsigned int i) { return vk::ImageSubresourceLayers(vk::ImageAspectFlagBits::eColor, i, 0, 1); };
auto image_memory_barrier = vk::ImageMemoryBarrier{}.setImage(image);
auto mipmap_levels = mipmap ? mipmap_levels_for_size(image_width, image_height) : 1;
// Transition base layer to readable format.
image_memory_barrier
.setImage(image)
.setOldLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
.setNewLayout(vk::ImageLayout::eTransferSrcOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eColorAttachmentWrite)
.setDstAccessMask(vk::AccessFlagBits::eTransferRead)
.setSubresourceRange(srr(0));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eAllGraphics,
vk::PipelineStageFlagBits::eTransfer,
{}, {}, {}, image_memory_barrier);
int base_width = image_width;
int base_height = image_height;
int base_level = 0;
for (; base_level + 1 < mipmap_levels; base_level++)
{
// Transition base layer to readable format.
if (base_level > 0)
{
image_memory_barrier
.setImage(image)
.setOldLayout(vk::ImageLayout::eTransferDstOptimal)
.setNewLayout(vk::ImageLayout::eTransferSrcOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
.setDstAccessMask(vk::AccessFlagBits::eTransferRead)
.setSubresourceRange(srr(base_level));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eTransfer,
{}, {}, {}, image_memory_barrier);
}
// Transition mipmap layer to writable
image_memory_barrier
.setImage(image)
.setOldLayout(vk::ImageLayout::eUndefined)
.setNewLayout(vk::ImageLayout::eTransferDstOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eNone)
.setDstAccessMask(vk::AccessFlagBits::eTransferWrite)
.setSubresourceRange(srr(base_level + 1));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eTransfer,
{}, {}, {}, image_memory_barrier);
// Blit base layer to mipmap layer
int mipmap_width = base_width >> 1;
int mipmap_height = base_height >> 1;
if (mipmap_width < 1)
mipmap_width = 1;
if (mipmap_height < 1)
mipmap_height = 1;
auto blit = vk::ImageBlit{}
.setSrcOffsets({ vk::Offset3D(0, 0, 0), vk::Offset3D(base_width, base_height, 1) })
.setDstOffsets({ vk::Offset3D(0, 0, 0), vk::Offset3D(mipmap_width, mipmap_height, 1) })
.setSrcSubresource(srl(base_level))
.setDstSubresource(srl(base_level + 1));
base_width = mipmap_width;
base_height = mipmap_height;
cmd.blitImage(image, vk::ImageLayout::eTransferSrcOptimal, image, vk::ImageLayout::eTransferDstOptimal, blit, vk::Filter::eLinear);
// Transition base layer to shader readable
image_memory_barrier
.setOldLayout(vk::ImageLayout::eTransferSrcOptimal)
.setNewLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
.setDstAccessMask(vk::AccessFlagBits::eShaderRead)
.setSubresourceRange(srr(base_level));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eFragmentShader,
{}, {}, {}, image_memory_barrier);
}
// Transition final layer to shader readable
image_memory_barrier
.setOldLayout(vk::ImageLayout::eTransferDstOptimal)
.setNewLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
.setDstAccessMask(vk::AccessFlagBits::eShaderRead)
.setSubresourceRange(srr(base_level));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eFragmentShader,
{}, {}, {}, image_memory_barrier);
}
void PipelineImage::barrier(vk::CommandBuffer cmd)
{
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eAllGraphics,
vk::PipelineStageFlagBits::eFragmentShader,
{}, {}, {}, {});
}
void PipelineImage::clear(vk::CommandBuffer cmd)
{
vk::ImageSubresourceRange subresource_range(vk::ImageAspectFlagBits::eColor, 0, VK_REMAINING_MIP_LEVELS, 0, 1);
auto image_memory_barrier = vk::ImageMemoryBarrier{}
.setImage(image)
.setSubresourceRange(subresource_range)
.setOldLayout(vk::ImageLayout::eUndefined)
.setNewLayout(vk::ImageLayout::eTransferDstOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite | vk::AccessFlagBits::eTransferRead)
.setDstAccessMask(vk::AccessFlagBits::eTransferWrite | vk::AccessFlagBits::eTransferRead);
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eTransfer,
{}, {}, {},
image_memory_barrier);
vk::ClearColorValue color{};
color.setFloat32({ 0.0f, 0.0f, 0.0f, 1.0f });
cmd.clearColorImage(image,
vk::ImageLayout::eTransferDstOptimal,
color,
subresource_range);
image_memory_barrier
.setOldLayout(vk::ImageLayout::eTransferDstOptimal)
.setNewLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
.setDstAccessMask(vk::AccessFlagBits::eShaderRead);
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eFragmentShader,
{}, {}, {},
image_memory_barrier);
current_layout = vk::ImageLayout::eShaderReadOnlyOptimal;
}
void PipelineImage::create(int width, int height, vk::Format fmt, vk::RenderPass renderpass, bool mipmap)
{
assert(width > 0 && height > 0);
assert(device && allocator);
this->mipmap = mipmap;
int mipmap_levels = mipmap ? mipmap_levels_for_size(width, height) : 1;
format = fmt;
auto allocation_create_info = vma::AllocationCreateInfo{}
.setUsage(vma::MemoryUsage::eAuto);
auto image_create_info = vk::ImageCreateInfo{}
.setUsage(vk::ImageUsageFlagBits::eTransferSrc | vk::ImageUsageFlagBits::eTransferDst | vk::ImageUsageFlagBits::eColorAttachment | vk::ImageUsageFlagBits::eSampled)
.setImageType(vk::ImageType::e2D)
.setExtent(vk::Extent3D(width, height, 1))
.setMipLevels(mipmap_levels)
.setArrayLayers(1)
.setFormat(format)
.setInitialLayout(vk::ImageLayout::eUndefined)
.setSamples(vk::SampleCountFlagBits::e1)
.setSharingMode(vk::SharingMode::eExclusive);
std::tie(image, image_allocation) = allocator.createImage(image_create_info, allocation_create_info);
auto subresource_range = vk::ImageSubresourceRange{}
.setAspectMask(vk::ImageAspectFlagBits::eColor)
.setBaseArrayLayer(0)
.setBaseMipLevel(0)
.setLayerCount(1)
.setLevelCount(mipmap_levels);
auto image_view_create_info = vk::ImageViewCreateInfo{}
.setImage(image)
.setViewType(vk::ImageViewType::e2D)
.setFormat(format)
.setComponents(vk::ComponentMapping())
.setSubresourceRange(subresource_range);
image_view = device.createImageView(image_view_create_info);
image_view_create_info.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
mipless_view = device.createImageView(image_view_create_info);
image_width = width;
image_height = height;
auto framebuffer_create_info = vk::FramebufferCreateInfo{}
.setAttachments(mipless_view)
.setWidth(width)
.setHeight(height)
.setRenderPass(renderpass)
.setLayers(1);
framebuffer = device.createFramebufferUnique(framebuffer_create_info);
}
} // namespace Vulkan


@ -0,0 +1,37 @@
#pragma once
#include "vulkan_context.hpp"
namespace Vulkan
{
struct PipelineImage
{
PipelineImage();
void init(vk::Device device, vk::CommandPool command, vk::Queue queue, vma::Allocator allocator);
void init(Vulkan::Context *context);
~PipelineImage();
void create(int width, int height, vk::Format fmt, vk::RenderPass renderpass, bool mipmap = false);
void destroy();
void barrier(vk::CommandBuffer cmd);
void generate_mipmaps(vk::CommandBuffer cmd);
void clear(vk::CommandBuffer cmd);
vk::ImageView image_view;
vk::ImageView mipless_view;
vk::Image image;
vk::Format format;
vk::UniqueFramebuffer framebuffer;
vma::Allocation image_allocation;
vk::ImageLayout current_layout;
int image_width;
int image_height;
bool mipmap;
vk::Device device;
vk::Queue queue;
vk::CommandPool command_pool;
vma::Allocator allocator;
};
} // namespace Vulkan


@ -0,0 +1,589 @@
#include "vulkan_shader_chain.hpp"
#include "slang_helpers.hpp"
#include "fmt/format.h"
#include "stb_image.h"
#include "vulkan/vulkan_enums.hpp"
namespace Vulkan
{
ShaderChain::ShaderChain(Context *context_)
{
context = context_;
original_history_size = 3;
original_width = 0;
original_height = 0;
viewport_width = 0;
viewport_height = 0;
vertex_buffer = nullptr;
vertex_buffer_allocation = nullptr;
last_frame_index = 2;
current_frame_index = 0;
}
ShaderChain::~ShaderChain()
{
if (context && context->device)
{
if (vertex_buffer)
context->allocator.destroyBuffer(vertex_buffer, vertex_buffer_allocation);
vertex_buffer = nullptr;
vertex_buffer_allocation = nullptr;
}
pipelines.clear();
}
void ShaderChain::construct_buffer_objects()
{
for (size_t i = 0; i < pipelines.size(); i++)
{
auto &pipeline = *pipelines[i];
uint8_t *ubo_memory = nullptr;
if (pipeline.shader->ubo_size > 0)
ubo_memory = (uint8_t *)context->allocator.mapMemory(pipeline.uniform_buffer_allocation);
for (auto &uniform : pipeline.shader->uniforms)
{
void *location = 0;
const float MVP[16] = { 1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f };
std::string block;
switch (uniform.block)
{
case SlangShader::Uniform::UBO:
location = &ubo_memory[uniform.offset];
block = "uniform";
break;
case SlangShader::Uniform::PushConstant:
location = &pipeline.push_constants[uniform.offset];
block = "push constant";
break;
}
auto write_size = [&location](float width, float height) {
std::array<float, 4> size;
size[0] = width;
size[1] = height;
size[2] = 1.0f / size[0];
size[3] = 1.0f / size[1];
memcpy(location, size.data(), sizeof(float) * 4);
};
switch (uniform.type)
{
case SlangShader::Uniform::PassSize:
case SlangShader::Uniform::PassFeedbackSize: // TODO: Does this need to differ?
if (uniform.specifier == -1)
{
write_size(original_width, original_height);
}
else
{
write_size(pipelines[uniform.specifier]->destination_width,
pipelines[uniform.specifier]->destination_height);
}
break;
case SlangShader::Uniform::ViewportSize:
write_size(viewport_width, viewport_height);
break;
case SlangShader::Uniform::PreviousFrameSize:
if (original.size() > 1)
write_size(original[1]->image_width, original[1]->image_height);
else
write_size(original_width, original_height);
break;
case SlangShader::Uniform::LutSize:
if (uniform.specifier < (int)lookup_textures.size())
write_size(lookup_textures[uniform.specifier]->image_width, lookup_textures[uniform.specifier]->image_height);
else
write_size(1.0f, 1.0f);
break;
case SlangShader::Uniform::MVP:
memcpy(location, MVP, sizeof(float) * 16);
break;
case SlangShader::Uniform::Parameter:
if (uniform.specifier < (int)preset->parameters.size())
memcpy(location, &preset->parameters[uniform.specifier].val, sizeof(float));
break;
case SlangShader::Uniform::FrameCount:
memcpy(location, &frame_count, sizeof(uint32_t));
break;
case SlangShader::Uniform::FrameDirection:
const int32_t frame_direction = 1;
memcpy(location, &frame_direction, sizeof(int32_t));
break;
}
}
if (pipeline.shader->ubo_size > 0)
{
context->allocator.unmapMemory(pipeline.uniform_buffer_allocation);
context->allocator.flushAllocation(pipeline.uniform_buffer_allocation, 0, pipeline.shader->ubo_size);
}
}
}
void ShaderChain::update_and_propagate_sizes(int original_width_new, int original_height_new, int viewport_width_new, int viewport_height_new)
{
if (pipelines.empty())
return;
if (original_width == original_width_new &&
original_height == original_height_new &&
viewport_width == viewport_width_new &&
viewport_height == viewport_height_new)
return;
original_width = original_width_new;
original_height = original_height_new;
viewport_width = viewport_width_new;
viewport_height = viewport_height_new;
for (size_t i = 0; i < pipelines.size(); i++)
{
auto &p = pipelines[i];
if (i != 0)
{
p->source_width = pipelines[i - 1]->destination_width;
p->source_height = pipelines[i - 1]->destination_height;
}
else
{
p->source_width = original_width_new;
p->source_height = original_height_new;
}
if (p->shader->scale_type_x == "viewport")
p->destination_width = viewport_width * p->shader->scale_x;
else if (p->shader->scale_type_x == "absolute")
p->destination_width = p->shader->scale_x;
else
p->destination_width = p->source_width * p->shader->scale_x;
if (p->shader->scale_type_y == "viewport")
p->destination_height = viewport_height * p->shader->scale_y;
else if (p->shader->scale_type_y == "absolute")
p->destination_height = p->shader->scale_y;
else
p->destination_height = p->source_height * p->shader->scale_y;
if (i == pipelines.size() - 1)
{
p->destination_width = viewport_width;
p->destination_height = viewport_height;
}
}
}
bool ShaderChain::load_shader_preset(std::string filename)
{
if (!ends_with(filename, ".slangp"))
fmt::print("Warning: loading preset without .slangp extension\n");
preset = std::make_unique<SlangPreset>();
if (!preset->load_preset_file(filename))
{
fmt::print("Couldn't load preset file: {}\n", filename);
return false;
}
if (!preset->introspect())
{
fmt::print("Failed introspection process in preset: {}\n", filename);
return false;
}
pipelines.clear();
pipelines.resize(preset->passes.size());
int num_ubos = 0;
int num_samplers = 0;
for (size_t i = 0; i < preset->passes.size(); i++)
{
auto &p = preset->passes[i];
pipelines[i] = std::make_unique<SlangPipeline>();
pipelines[i]->init(context, &p);
bool lastpass = (i == preset->passes.size() - 1);
if (!pipelines[i]->generate_pipeline(lastpass))
{
fmt::print("Couldn't create pipeline for shader: {}\n", p.filename);
return false;
}
for (auto &u : p.samplers)
if (u.type == SlangShader::Sampler::PreviousFrame)
if (u.specifier > (int)original_history_size)
original_history_size = u.specifier;
if (p.ubo_size)
num_ubos++;
if (p.samplers.size() > 0)
num_samplers += p.samplers.size();
}
std::array<vk::DescriptorPoolSize, 2> descriptor_pool_sizes;
descriptor_pool_sizes[0]
.setType(vk::DescriptorType::eUniformBuffer)
.setDescriptorCount(num_ubos * 3);
descriptor_pool_sizes[1]
.setType(vk::DescriptorType::eCombinedImageSampler)
.setDescriptorCount(num_samplers * 3);
auto descriptor_pool_create_info = vk::DescriptorPoolCreateInfo{}
.setPoolSizes(descriptor_pool_sizes)
.setMaxSets(pipelines.size() * 3)
.setFlags(vk::DescriptorPoolCreateFlagBits::eFreeDescriptorSet);
descriptor_pool = context->device.createDescriptorPoolUnique(descriptor_pool_create_info);
for (auto &p : pipelines)
p->generate_frame_resources(descriptor_pool.get());
load_lookup_textures();
float vertex_data[] = { -1.0f, -3.0f, 0.0f, 1.0f, /* texcoords */ 0.0, -1.0f,
3.0f, 1.0f, 0.0f, 1.0f, 2.0f, 1.0f,
-1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f };
auto buffer_create_info = vk::BufferCreateInfo{}
.setSize(sizeof(vertex_data))
.setUsage(vk::BufferUsageFlagBits::eVertexBuffer);
auto allocation_create_info = vma::AllocationCreateInfo{}
.setFlags(vma::AllocationCreateFlagBits::eHostAccessSequentialWrite)
.setRequiredFlags(vk::MemoryPropertyFlagBits::eHostVisible);
std::tie(vertex_buffer, vertex_buffer_allocation) = context->allocator.createBuffer(buffer_create_info, allocation_create_info);
auto vertex_buffer_memory = context->allocator.mapMemory(vertex_buffer_allocation);
memcpy(vertex_buffer_memory, vertex_data, sizeof(vertex_data));
context->allocator.unmapMemory(vertex_buffer_allocation);
context->allocator.flushAllocation(vertex_buffer_allocation, 0, sizeof(vertex_data));
frame_count = 0;
return true;
}
void ShaderChain::update_framebuffers(vk::CommandBuffer cmd, int frame_num)
{
size_t pass_count = pipelines.size() - 1;
if (preset->last_pass_uses_feedback)
pass_count++;
for (size_t i = 0; i < pass_count; i++)
{
bool mipmap = false;
if (i < pipelines.size() - 1)
mipmap = pipelines[i + 1]->shader->mipmap_input;
pipelines[i]->update_framebuffer(cmd, frame_num, mipmap);
}
}
void ShaderChain::update_descriptor_set(vk::CommandBuffer cmd, int pipe_num, int swapchain_index)
{
auto &pipe = *pipelines[pipe_num];
auto &frame = pipe.frame[swapchain_index];
if (pipe.shader->ubo_size > 0)
{
auto descriptor_buffer_info = vk::DescriptorBufferInfo{}
.setBuffer(pipe.uniform_buffer)
.setOffset(0)
.setRange(pipe.shader->ubo_size);
auto write_descriptor_set = vk::WriteDescriptorSet{}
.setDescriptorType(vk::DescriptorType::eUniformBuffer)
.setBufferInfo(descriptor_buffer_info)
.setDstBinding(pipe.shader->ubo_binding)
.setDstSet(frame.descriptor_set.get());
context->device.updateDescriptorSets(write_descriptor_set, {});
}
auto descriptor_image_info = vk::DescriptorImageInfo{}
.setImageLayout(vk::ImageLayout::eShaderReadOnlyOptimal);
for (auto &sampler : pipe.shader->samplers)
{
if (sampler.type == SlangShader::Sampler::Lut)
{
descriptor_image_info
.setImageView(lookup_textures[sampler.specifier]->image_view)
.setSampler(lookup_textures[sampler.specifier]->sampler);
}
else if (sampler.type == SlangShader::Sampler::PassFeedback)
{
assert(sampler.specifier < (int)pipelines.size());
assert(sampler.specifier >= 0);
if (!pipelines[sampler.specifier]->frame[last_frame_index].image.image)
update_framebuffers(cmd, last_frame_index);
auto &feedback_frame = pipelines[sampler.specifier]->frame[last_frame_index];
if (feedback_frame.image.current_layout == vk::ImageLayout::eUndefined)
feedback_frame.image.clear(cmd);
descriptor_image_info
.setImageView(pipelines[sampler.specifier]->frame[last_frame_index].image.image_view);
if (sampler.specifier == (int)pipelines.size() - 1)
descriptor_image_info.setSampler(pipelines[sampler.specifier]->sampler.get());
else
descriptor_image_info.setSampler(pipelines[sampler.specifier + 1]->sampler.get());
}
else if (sampler.type == SlangShader::Sampler::Pass)
{
assert(sampler.specifier + 1 < (int)pipelines.size());
auto &sampler_to_use = pipelines[sampler.specifier + 1]->sampler.get();
if (sampler.specifier == -1)
{
descriptor_image_info
.setSampler(sampler_to_use)
.setImageView(original[0]->image_view);
}
else
{
descriptor_image_info
.setSampler(sampler_to_use)
.setImageView(pipelines[sampler.specifier]->frame[swapchain_index].image.image_view);
}
}
else if (sampler.type == SlangShader::Sampler::PreviousFrame)
{
int which_original = sampler.specifier;
if (which_original >= (int)original.size())
which_original = original.size() - 1;
assert(which_original > -1);
descriptor_image_info
.setSampler(pipelines[0]->sampler.get())
.setImageView(original[which_original]->image_view);
}
auto write_descriptor_set = vk::WriteDescriptorSet{}
.setDescriptorType(vk::DescriptorType::eCombinedImageSampler)
.setDstSet(frame.descriptor_set.get())
.setDstBinding(sampler.binding)
.setImageInfo(descriptor_image_info);
context->device.updateDescriptorSets(write_descriptor_set, {});
}
}
void ShaderChain::do_frame(uint8_t *data, int width, int height, int stride, vk::Format format, int viewport_x, int viewport_y, int viewport_width, int viewport_height)
{
if (!context->swapchain->begin_frame())
return;
auto cmd = context->swapchain->get_cmd();
current_frame_index = context->swapchain->get_current_frame();
update_and_propagate_sizes(width, height, viewport_width, viewport_height);
update_framebuffers(cmd, current_frame_index);
upload_original(cmd, data, width, height, stride, format);
construct_buffer_objects();
for (size_t i = 0; i < pipelines.size(); i++)
{
auto &pipe = *pipelines[i];
auto &frame = pipe.frame[current_frame_index];
update_descriptor_set(cmd, i, current_frame_index);
vk::ClearValue value{};
value.color = { 0.0f, 0.0f, 0.0f, 1.0f };
auto render_pass_begin_info = vk::RenderPassBeginInfo{}
.setRenderPass(pipe.render_pass.get())
.setFramebuffer(frame.image.framebuffer.get())
.setRenderArea(vk::Rect2D({}, vk::Extent2D(frame.image.image_width, frame.image.image_height)))
.setClearValues(value);
if (i == pipelines.size() - 1)
context->swapchain->begin_render_pass();
else
cmd.beginRenderPass(render_pass_begin_info, vk::SubpassContents::eInline);
cmd.bindPipeline(vk::PipelineBindPoint::eGraphics, pipe.pipeline.get());
cmd.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, pipe.pipeline_layout.get(), 0, frame.descriptor_set.get(), {});
cmd.bindVertexBuffers(0, vertex_buffer, { 0 });
if (pipe.push_constants.size() > 0)
cmd.pushConstants(pipe.pipeline_layout.get(), vk::ShaderStageFlagBits::eAllGraphics, 0, static_cast<uint32_t>(pipe.push_constants.size()), pipe.push_constants.data());
if (i < pipelines.size() - 1)
{
cmd.setViewport(0, vk::Viewport(0, 0, pipe.destination_width, pipe.destination_height, 0.0f, 1.0f));
cmd.setScissor(0, vk::Rect2D({}, vk::Extent2D(pipe.destination_width, pipe.destination_height)));
}
else
{
cmd.setViewport(0, vk::Viewport(viewport_x, viewport_y, viewport_width, viewport_height, 0.0f, 1.0f));
cmd.setScissor(0, vk::Rect2D(vk::Offset2D(viewport_x, viewport_y), vk::Extent2D(viewport_width, viewport_height)));
}
cmd.draw(3, 1, 0, 0);
if (i < pipelines.size() - 1)
{
cmd.endRenderPass();
}
else
{
context->swapchain->end_render_pass();
}
frame.image.barrier(cmd);
if (i < pipelines.size() - 1)
frame.image.generate_mipmaps(cmd);
if (preset->last_pass_uses_feedback && i == pipelines.size() - 1)
{
std::array<vk::ImageMemoryBarrier, 2> image_memory_barrier{};
image_memory_barrier[0]
.setImage(frame.image.image)
.setOldLayout(vk::ImageLayout::eUndefined)
.setNewLayout(vk::ImageLayout::eTransferDstOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eColorAttachmentWrite)
.setDstAccessMask(vk::AccessFlagBits::eTransferWrite)
.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
image_memory_barrier[1]
.setImage(context->swapchain->get_image())
.setOldLayout(vk::ImageLayout::ePresentSrcKHR)
.setNewLayout(vk::ImageLayout::eTransferSrcOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eColorAttachmentWrite)
.setDstAccessMask(vk::AccessFlagBits::eTransferRead)
.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eColorAttachmentOutput,
vk::PipelineStageFlagBits::eTransfer,
{}, {}, {}, image_memory_barrier);
auto image_blit = vk::ImageBlit{}
.setSrcOffsets({ vk::Offset3D(viewport_x, viewport_y, 0), vk::Offset3D(viewport_x + viewport_width, viewport_y + viewport_height, 1) })
.setDstOffsets({ vk::Offset3D(0, 0, 0), vk::Offset3D(viewport_width, viewport_height, 1) })
.setSrcSubresource(vk::ImageSubresourceLayers(vk::ImageAspectFlagBits::eColor, 0, 0, 1))
.setDstSubresource(vk::ImageSubresourceLayers(vk::ImageAspectFlagBits::eColor, 0, 0, 1));
cmd.blitImage(context->swapchain->get_image(), vk::ImageLayout::eTransferSrcOptimal, frame.image.image, vk::ImageLayout::eTransferDstOptimal, image_blit, vk::Filter::eNearest);
image_memory_barrier[0]
.setOldLayout(vk::ImageLayout::eTransferDstOptimal)
.setNewLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
.setDstAccessMask(vk::AccessFlagBits::eShaderRead)
.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
image_memory_barrier[1]
.setOldLayout(vk::ImageLayout::eTransferSrcOptimal)
.setNewLayout(vk::ImageLayout::ePresentSrcKHR)
.setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
.setDstAccessMask(vk::AccessFlagBits::eMemoryRead)
.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eAllGraphics,
{}, {}, {}, image_memory_barrier);
frame.image.current_layout = vk::ImageLayout::eTransferDstOptimal;
}
}
context->swapchain->end_frame();
last_frame_index = current_frame_index;
frame_count++;
}
void ShaderChain::upload_original(vk::CommandBuffer cmd, uint8_t *data, int width, int height, int stride, vk::Format format)
{
std::unique_ptr<Texture> texture;
auto create_texture = [&]() {
texture->create(width,
height,
format,
wrap_mode_from_string(pipelines[0]->shader->wrap_mode),
pipelines[0]->shader->filter_linear,
true);
};
if (original.size() > original_history_size)
{
texture = std::move(original.back());
original.pop_back();
if (texture->image_width != width || texture->image_height != height || texture->format != format)
{
texture->destroy();
create_texture();
}
}
else
{
texture = std::make_unique<Texture>();
texture->init(context);
create_texture();
}
if (cmd)
texture->from_buffer(cmd, data, width, height, stride);
else
texture->from_buffer(data, width, height, stride);
original.push_front(std::move(texture));
}
void ShaderChain::upload_original(uint8_t *data, int width, int height, int stride, vk::Format format)
{
upload_original(nullptr, data, width, height, stride, format);
}
bool ShaderChain::load_lookup_textures()
{
if (preset->textures.size() < 1)
return true;
lookup_textures.clear();
for (auto &l : preset->textures)
{
int width, height, channels;
stbi_uc *bytes = stbi_load(l.filename.c_str(), &width, &height, &channels, 4);
if (!bytes)
{
fmt::print("Couldn't load look-up texture: {}\n", l.filename);
return false;
}
auto wrap_mode = wrap_mode_from_string(l.wrap_mode);
lookup_textures.push_back(std::make_unique<Texture>());
auto &t = lookup_textures.back();
t->init(context);
t->create(width, height, vk::Format::eR8G8B8A8Unorm, wrap_mode, l.linear, l.mipmap);
t->from_buffer(bytes, width, height);
t->discard_staging_buffer();
free(bytes);
}
return true;
}
} // namespace Vulkan


@ -0,0 +1,45 @@
#pragma once
#include "vulkan_context.hpp"
#include "slang_preset.hpp"
#include "vulkan_slang_pipeline.hpp"
#include "vulkan_texture.hpp"
#include <string>
#include <deque>
namespace Vulkan
{
class ShaderChain
{
public:
ShaderChain(Context *context_);
~ShaderChain();
bool load_shader_preset(std::string filename);
void update_and_propagate_sizes(int original_width_, int original_height_, int viewport_width_, int viewport_height_);
bool load_lookup_textures();
void do_frame(uint8_t *data, int width, int height, int stride, vk::Format format, int viewport_x, int viewport_y, int viewport_width, int viewport_height);
void upload_original(uint8_t *data, int width, int height, int stride, vk::Format format);
void upload_original(vk::CommandBuffer cmd, uint8_t *data, int width, int height, int stride, vk::Format format);
void construct_buffer_objects();
void update_framebuffers(vk::CommandBuffer cmd, int frame_num);
void update_descriptor_set(vk::CommandBuffer cmd, int pipe_num, int swapchain_index);
std::unique_ptr<SlangPreset> preset;
Context *context;
std::vector<std::unique_ptr<SlangPipeline>> pipelines;
size_t original_history_size;
uint32_t frame_count;
int original_width;
int original_height;
int viewport_width;
int viewport_height;
vk::UniqueDescriptorPool descriptor_pool;
std::vector<std::unique_ptr<Texture> > lookup_textures;
std::deque<std::unique_ptr<Texture> > original;
vk::Buffer vertex_buffer;
vma::Allocation vertex_buffer_allocation;
int current_frame_index;
int last_frame_index;
};
} // namespace Vulkan


@ -0,0 +1,399 @@
#include "vulkan_slang_pipeline.hpp"
#include "slang_helpers.hpp"
#include "fmt/format.h"
#include <unordered_map>
namespace Vulkan
{
static VkFormat format_string_to_format(std::string target, VkFormat default_format)
{
struct
{
std::string string;
VkFormat format;
} formats[] = {
{ "R8_UNORM", VK_FORMAT_R8_UNORM },
{ "R8_UINT", VK_FORMAT_R8_UINT },
{ "R8_SINT", VK_FORMAT_R8_SINT },
{ "R8G8_UNORM", VK_FORMAT_R8G8_UNORM },
{ "R8G8_UINT", VK_FORMAT_R8G8_UINT },
{ "R8G8_SINT", VK_FORMAT_R8G8_SINT },
{ "R8G8B8A8_UNORM", VK_FORMAT_R8G8B8A8_UNORM },
{ "R8G8B8A8_UINT", VK_FORMAT_R8G8B8A8_UINT },
{ "R8G8B8A8_SINT", VK_FORMAT_R8G8B8A8_SINT },
{ "R8G8B8A8_SRGB", VK_FORMAT_R8G8B8A8_SRGB },
{ "R16_UINT", VK_FORMAT_R16_UINT },
{ "R16_SINT", VK_FORMAT_R16_SINT },
{ "R16_SFLOAT", VK_FORMAT_R16_SFLOAT },
{ "R16G16_UINT", VK_FORMAT_R16G16_UINT },
{ "R16G16_SINT", VK_FORMAT_R16G16_SINT },
{ "R16G16_SFLOAT", VK_FORMAT_R16G16_SFLOAT },
{ "R16G16B16A16_UINT", VK_FORMAT_R16G16B16A16_UINT },
{ "R16G16B16A16_SINT", VK_FORMAT_R16G16B16A16_SINT },
{ "R16G16B16A16_SFLOAT", VK_FORMAT_R16G16B16A16_SFLOAT },
{ "R32_UINT", VK_FORMAT_R32_UINT },
{ "R32_SINT", VK_FORMAT_R32_SINT },
{ "R32_SFLOAT", VK_FORMAT_R32_SFLOAT },
{ "R32G32_UINT", VK_FORMAT_R32G32_UINT },
{ "R32G32_SINT", VK_FORMAT_R32G32_SINT },
{ "R32G32_SFLOAT", VK_FORMAT_R32G32_SFLOAT },
{ "R32G32B32A32_UINT", VK_FORMAT_R32G32B32A32_UINT },
{ "R32G32B32A32_SINT", VK_FORMAT_R32G32B32A32_SINT },
{ "R32G32B32A32_SFLOAT", VK_FORMAT_R32G32B32A32_SFLOAT }
};
for (auto &f : formats)
{
if (f.string == target)
return f.format;
}
return default_format;
}
vk::SamplerAddressMode wrap_mode_from_string(std::string s)
{
if (s == "clamp_to_border")
return vk::SamplerAddressMode::eClampToBorder;
if (s == "repeat")
return vk::SamplerAddressMode::eRepeat;
if (s == "mirrored_repeat")
return vk::SamplerAddressMode::eMirroredRepeat;
return vk::SamplerAddressMode::eClampToBorder;
}
SlangPipeline::SlangPipeline()
{
device = nullptr;
shader = nullptr;
uniform_buffer = nullptr;
uniform_buffer_allocation = nullptr;
source_width = 0;
source_height = 0;
destination_width = 0;
destination_height = 0;
}
void SlangPipeline::init(Context *context_, SlangShader *shader_)
{
this->context = context_;
this->device = context->device;
this->shader = shader_;
}
SlangPipeline::~SlangPipeline()
{
device.waitIdle();
if (uniform_buffer)
{
context->allocator.destroyBuffer(uniform_buffer, uniform_buffer_allocation);
}
for (auto &f : frame)
{
if (f.descriptor_set)
{
f.descriptor_set.reset();
f.image.destroy();
}
}
pipeline.reset();
pipeline_layout.reset();
descriptor_set_layout.reset();
render_pass.reset();
}
bool SlangPipeline::generate_pipeline(bool lastpass)
{
VkFormat backup_format = VK_FORMAT_R8G8B8A8_UNORM;
if (shader->srgb_framebuffer)
backup_format = VK_FORMAT_R8G8B8A8_SRGB;
if (shader->float_framebuffer)
backup_format = VK_FORMAT_R32G32B32A32_SFLOAT;
this->format = vk::Format(format_string_to_format(shader->format, backup_format));
auto attachment_description = vk::AttachmentDescription{}
.setFormat(this->format)
.setSamples(vk::SampleCountFlagBits::e1)
.setLoadOp(vk::AttachmentLoadOp::eClear)
.setStoreOp(vk::AttachmentStoreOp::eStore)
.setStencilLoadOp(vk::AttachmentLoadOp::eDontCare)
.setStencilStoreOp(vk::AttachmentStoreOp::eStore)
.setInitialLayout(vk::ImageLayout::eUndefined)
.setFinalLayout(vk::ImageLayout::eShaderReadOnlyOptimal);
auto attachment_reference = vk::AttachmentReference{}
.setAttachment(0)
.setLayout(vk::ImageLayout::eColorAttachmentOptimal);
std::array<vk::SubpassDependency, 2> subpass_dependency;
subpass_dependency[0]
.setSrcSubpass(VK_SUBPASS_EXTERNAL)
.setDstSubpass(0)
.setSrcStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setSrcAccessMask(vk::AccessFlagBits(0))
.setDstStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setDstAccessMask(vk::AccessFlagBits::eColorAttachmentWrite);
subpass_dependency[1]
.setSrcSubpass(VK_SUBPASS_EXTERNAL)
.setDstSubpass(0)
.setSrcStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setSrcAccessMask(vk::AccessFlagBits::eColorAttachmentWrite)
.setDstStageMask(vk::PipelineStageFlagBits::eFragmentShader)
.setDstAccessMask(vk::AccessFlagBits::eShaderRead);
auto subpass_description = vk::SubpassDescription{}
.setColorAttachments(attachment_reference)
.setPipelineBindPoint(vk::PipelineBindPoint::eGraphics);
auto render_pass_create_info = vk::RenderPassCreateInfo{}
.setSubpasses(subpass_description)
.setDependencies(subpass_dependency)
.setAttachments(attachment_description);
render_pass = device.createRenderPassUnique(render_pass_create_info);
auto vertex_module = device.createShaderModuleUnique({ {}, shader->vertex_shader_spirv });
auto fragment_module = device.createShaderModuleUnique({ {}, shader->fragment_shader_spirv });
auto vertex_ci = vk::PipelineShaderStageCreateInfo{}
.setStage(vk::ShaderStageFlagBits::eVertex)
.setModule(vertex_module.get())
.setPName("main");
auto fragment_ci = vk::PipelineShaderStageCreateInfo{}
.setStage(vk::ShaderStageFlagBits::eFragment)
.setModule(fragment_module.get())
.setPName("main");
std::vector<vk::PipelineShaderStageCreateInfo> stages = { vertex_ci, fragment_ci };
auto vertex_input_binding_description = vk::VertexInputBindingDescription{}
.setBinding(0)
.setInputRate(vk::VertexInputRate::eVertex)
.setStride(sizeof(float) * 6);
// Position 4x float
std::array<vk::VertexInputAttributeDescription, 2> vertex_attribute_description{};
vertex_attribute_description[0]
.setBinding(0)
.setFormat(vk::Format::eR32G32B32A32Sfloat)
.setOffset(0)
.setLocation(0);
// texcoord 2x float
vertex_attribute_description[1]
.setBinding(0)
.setFormat(vk::Format::eR32G32Sfloat)
.setOffset(sizeof(float) * 4)
.setLocation(1);
auto vertex_input_info = vk::PipelineVertexInputStateCreateInfo{}
.setVertexBindingDescriptions(vertex_input_binding_description)
.setVertexAttributeDescriptions(vertex_attribute_description);
auto pipeline_input_assembly_info = vk::PipelineInputAssemblyStateCreateInfo{}
.setTopology(vk::PrimitiveTopology::eTriangleList)
.setPrimitiveRestartEnable(false);
std::vector<vk::Viewport> viewports(1);
viewports[0]
.setX(0.0f)
.setY(0.0f)
.setWidth(256)
.setHeight(256)
.setMinDepth(0.0f)
.setMaxDepth(1.0f);
std::vector<vk::Rect2D> scissors(1);
scissors[0].extent.width = 256;
scissors[0].extent.height = 256;
scissors[0].offset = vk::Offset2D(0, 0);
auto pipeline_viewport_info = vk::PipelineViewportStateCreateInfo{}
.setViewports(viewports)
.setScissors(scissors);
auto rasterizer_info = vk::PipelineRasterizationStateCreateInfo{}
.setCullMode(vk::CullModeFlagBits::eBack)
.setFrontFace(vk::FrontFace::eClockwise)
.setLineWidth(1.0f)
.setDepthClampEnable(false)
.setRasterizerDiscardEnable(false)
.setPolygonMode(vk::PolygonMode::eFill)
.setDepthBiasEnable(false);
auto multisample_info = vk::PipelineMultisampleStateCreateInfo{}
.setSampleShadingEnable(false)
.setRasterizationSamples(vk::SampleCountFlagBits::e1);
auto depth_stencil_info = vk::PipelineDepthStencilStateCreateInfo{}
.setDepthTestEnable(false);
auto blend_attachment_info = vk::PipelineColorBlendAttachmentState{}
.setColorWriteMask(vk::ColorComponentFlagBits::eB |
vk::ColorComponentFlagBits::eG |
vk::ColorComponentFlagBits::eR |
vk::ColorComponentFlagBits::eA)
.setBlendEnable(false)
.setColorBlendOp(vk::BlendOp::eAdd);
// .setSrcColorBlendFactor(vk::BlendFactor::eSrcAlpha)
// .setDstColorBlendFactor(vk::BlendFactor::eOneMinusSrcAlpha)
// .setAlphaBlendOp(vk::BlendOp::eAdd)
// .setSrcAlphaBlendFactor(vk::BlendFactor::eOne)
// .setDstAlphaBlendFactor(vk::BlendFactor::eZero);
auto blend_state_info = vk::PipelineColorBlendStateCreateInfo{}
.setLogicOpEnable(false)
.setAttachments(blend_attachment_info);
std::vector<vk::DynamicState> states = { vk::DynamicState::eViewport, vk::DynamicState::eScissor };
vk::PipelineDynamicStateCreateInfo dynamic_state_info({}, states);
std::vector<vk::DescriptorSetLayoutBinding> descriptor_set_layout_bindings;
if (shader->ubo_size > 0)
{
vk::DescriptorSetLayoutBinding binding(
shader->ubo_binding,
vk::DescriptorType::eUniformBuffer,
1,
vk::ShaderStageFlagBits::eAllGraphics);
descriptor_set_layout_bindings.push_back(binding);
}
if (!shader->samplers.empty())
{
for (const auto &s : shader->samplers)
{
vk::DescriptorSetLayoutBinding binding(
s.binding,
vk::DescriptorType::eCombinedImageSampler,
1,
vk::ShaderStageFlagBits::eFragment);
descriptor_set_layout_bindings.push_back(binding);
}
}
auto dslci = vk::DescriptorSetLayoutCreateInfo{}
.setBindings(descriptor_set_layout_bindings);
descriptor_set_layout = device.createDescriptorSetLayoutUnique(dslci);
vk::PushConstantRange pcr(vk::ShaderStageFlagBits::eAllGraphics, 0, shader->push_constant_block_size);
auto pipeline_layout_info = vk::PipelineLayoutCreateInfo{}
.setSetLayouts(descriptor_set_layout.get());
if (shader->push_constant_block_size > 0)
pipeline_layout_info.setPushConstantRanges(pcr);
pipeline_layout = device.createPipelineLayoutUnique(pipeline_layout_info);
auto pipeline_create_info = vk::GraphicsPipelineCreateInfo{}
.setStageCount(2)
.setStages(stages)
.setPVertexInputState(&vertex_input_info)
.setPInputAssemblyState(&pipeline_input_assembly_info)
.setPViewportState(&pipeline_viewport_info)
.setPRasterizationState(&rasterizer_info)
.setPMultisampleState(&multisample_info)
.setPDepthStencilState(&depth_stencil_info)
.setPColorBlendState(&blend_state_info)
.setPDynamicState(&dynamic_state_info)
.setLayout(pipeline_layout.get())
.setRenderPass(render_pass.get())
.setSubpass(0);
if (lastpass)
pipeline_create_info.setRenderPass(context->swapchain->get_render_pass());
auto [result, pipeline] = device.createGraphicsPipelineUnique(nullptr, pipeline_create_info);
if (result != vk::Result::eSuccess)
{
fmt::print("Failed to create pipeline for shader: {}\n", shader->filename);
return false;
}
this->pipeline = std::move(pipeline);
return true;
}
void SlangPipeline::update_framebuffer(vk::CommandBuffer cmd, int num, bool mipmap)
{
auto &image = frame[num].image;
if (image.image_width != destination_width || image.image_height != destination_height)
{
image.destroy();
image.create(destination_width, destination_height, format, render_pass.get(), mipmap);
}
}
bool SlangPipeline::generate_frame_resources(vk::DescriptorPool pool)
{
for (auto &f : frame)
{
if (f.descriptor_set)
{
f.descriptor_set.reset();
f.image.destroy();
}
vk::DescriptorSetAllocateInfo descriptor_set_allocate_info(pool, descriptor_set_layout.get());
auto result = device.allocateDescriptorSetsUnique(descriptor_set_allocate_info);
f.descriptor_set = std::move(result[0]);
}
semaphore = device.createSemaphoreUnique({});
push_constants.resize(shader->push_constant_block_size);
if (shader->ubo_size > 0)
{
auto buffer_create_info = vk::BufferCreateInfo{}
.setSize(shader->ubo_size)
.setUsage(vk::BufferUsageFlagBits::eUniformBuffer);
auto allocation_create_info = vma::AllocationCreateInfo{}
.setFlags(vma::AllocationCreateFlagBits::eHostAccessSequentialWrite)
.setRequiredFlags(vk::MemoryPropertyFlagBits::eHostVisible);
std::tie(uniform_buffer, uniform_buffer_allocation) = context->allocator.createBuffer(buffer_create_info, allocation_create_info);
}
else
{
uniform_buffer = nullptr;
uniform_buffer_allocation = nullptr;
}
for (auto &f : frame)
f.image.init(context);
auto wrap_mode = wrap_mode_from_string(shader->wrap_mode);
auto filter = shader->filter_linear ? vk::Filter::eLinear : vk::Filter::eNearest;
auto sampler_create_info = vk::SamplerCreateInfo{}
.setAddressModeU(wrap_mode)
.setAddressModeV(wrap_mode)
.setAddressModeW(wrap_mode)
.setMagFilter(filter)
.setMinFilter(filter)
.setMipmapMode(vk::SamplerMipmapMode::eLinear)
.setMinLod(0.0f)
.setMaxLod(VK_LOD_CLAMP_NONE)
.setAnisotropyEnable(false);
sampler = device.createSamplerUnique(sampler_create_info);
return true;
}
} // namespace Vulkan


@ -0,0 +1,50 @@
#pragma once
#include "vulkan/vulkan.hpp"
#include "slang_shader.hpp"
#include "vulkan_context.hpp"
#include "vulkan_pipeline_image.hpp"
namespace Vulkan
{
class SlangPipeline
{
public:
SlangPipeline();
void init(Context *context_, SlangShader *shader_);
~SlangPipeline();
bool generate_pipeline(bool lastpass = false);
bool generate_frame_resources(vk::DescriptorPool pool);
void update_framebuffer(vk::CommandBuffer, int frame_num, bool mipmap);
Context *context;
vk::Device device;
vk::Format format;
SlangShader *shader;
vk::UniqueRenderPass render_pass;
vk::UniqueDescriptorSetLayout descriptor_set_layout;
vk::UniquePipelineLayout pipeline_layout;
vk::UniquePipeline pipeline;
vk::UniqueSemaphore semaphore;
vk::UniqueSampler sampler;
struct
{
vk::UniqueDescriptorSet descriptor_set;
PipelineImage image;
} frame[3];
vk::Buffer uniform_buffer;
vma::Allocation uniform_buffer_allocation;
std::vector<uint8_t> push_constants;
int source_width;
int source_height;
int destination_width;
int destination_height;
};
vk::SamplerAddressMode wrap_mode_from_string(std::string s);
} // namespace Vulkan

300
vulkan/vulkan_swapchain.cpp Normal file

@ -0,0 +1,300 @@
#include "vulkan_swapchain.hpp"
#include "vulkan/vulkan_structs.hpp"
namespace Vulkan
{
Swapchain::Swapchain(vk::Device device_, vk::PhysicalDevice physical_device_, vk::Queue queue_, vk::SurfaceKHR surface_, vk::CommandPool command_pool_)
: surface(surface_),
command_pool(command_pool_),
physical_device(physical_device_),
queue(queue_)
{
device = device_;
create_render_pass();
}
Swapchain::~Swapchain()
{
}
bool Swapchain::set_vsync(bool new_setting)
{
if (new_setting == vsync)
return false;
vsync = new_setting;
return true;
}
void Swapchain::create_render_pass()
{
auto attachment_description = vk::AttachmentDescription{}
.setFormat(vk::Format::eB8G8R8A8Unorm)
.setSamples(vk::SampleCountFlagBits::e1)
.setLoadOp(vk::AttachmentLoadOp::eClear)
.setStoreOp(vk::AttachmentStoreOp::eStore)
.setStencilLoadOp(vk::AttachmentLoadOp::eDontCare)
.setStencilStoreOp(vk::AttachmentStoreOp::eStore)
.setInitialLayout(vk::ImageLayout::eUndefined)
.setFinalLayout(vk::ImageLayout::ePresentSrcKHR);
auto attachment_reference = vk::AttachmentReference{}
.setAttachment(0)
.setLayout(vk::ImageLayout::eColorAttachmentOptimal);
std::array<vk::SubpassDependency, 2> subpass_dependency{};
subpass_dependency[0]
.setSrcSubpass(VK_SUBPASS_EXTERNAL)
.setDstSubpass(0)
.setSrcStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setSrcAccessMask(vk::AccessFlagBits(0))
.setDstStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setDstAccessMask(vk::AccessFlagBits::eColorAttachmentWrite);
subpass_dependency[1]
.setSrcSubpass(VK_SUBPASS_EXTERNAL)
.setDstSubpass(0)
.setSrcStageMask(vk::PipelineStageFlagBits::eColorAttachmentOutput)
.setSrcAccessMask(vk::AccessFlagBits::eColorAttachmentWrite)
.setDstStageMask(vk::PipelineStageFlagBits::eFragmentShader)
.setDstAccessMask(vk::AccessFlagBits::eShaderRead);
auto subpass_description = vk::SubpassDescription{}
.setColorAttachments(attachment_reference)
.setPipelineBindPoint(vk::PipelineBindPoint::eGraphics);
auto render_pass_create_info = vk::RenderPassCreateInfo{}
.setSubpasses(subpass_description)
.setDependencies(subpass_dependency)
.setAttachments(attachment_description);
render_pass = device.createRenderPassUnique(render_pass_create_info);
}
bool Swapchain::recreate()
{
device.waitIdle();
return create(num_frames);
}
vk::Image Swapchain::get_image()
{
return imageviewfbs[current_swapchain_image].image;
}
bool Swapchain::create(unsigned int desired_num_frames)
{
frames.clear();
imageviewfbs.clear();
auto surface_capabilities = physical_device.getSurfaceCapabilitiesKHR(surface);
if (surface_capabilities.minImageCount > desired_num_frames)
num_frames = surface_capabilities.minImageCount;
else
num_frames = desired_num_frames;
extents = surface_capabilities.currentExtent;
if (extents.width <= 1 || extents.height <= 1)
{
// Surface is likely hidden
printf("Extents too small.\n");
return false;
}
auto swapchain_create_info = vk::SwapchainCreateInfoKHR{}
.setMinImageCount(num_frames)
.setImageFormat(vk::Format::eB8G8R8A8Unorm)
.setImageExtent(extents)
.setImageColorSpace(vk::ColorSpaceKHR::eSrgbNonlinear)
.setImageSharingMode(vk::SharingMode::eExclusive)
.setImageUsage(vk::ImageUsageFlagBits::eColorAttachment | vk::ImageUsageFlagBits::eTransferSrc)
.setCompositeAlpha(vk::CompositeAlphaFlagBitsKHR::eOpaque)
.setClipped(true)
.setPresentMode(vsync ? vk::PresentModeKHR::eFifo : vk::PresentModeKHR::eImmediate)
.setSurface(surface)
.setImageArrayLayers(1);
if (swapchain)
swapchain_create_info.setOldSwapchain(swapchain.get());
swapchain = device.createSwapchainKHRUnique(swapchain_create_info);
auto swapchain_images = device.getSwapchainImagesKHR(swapchain.get());
vk::CommandBufferAllocateInfo command_buffer_allocate_info(command_pool, vk::CommandBufferLevel::ePrimary, num_frames);
auto command_buffers = device.allocateCommandBuffersUnique(command_buffer_allocate_info);
if (swapchain_images.size() > num_frames)
    num_frames = swapchain_images.size();
frames.resize(num_frames);
imageviewfbs.resize(num_frames);
vk::FenceCreateInfo fence_create_info(vk::FenceCreateFlagBits::eSignaled);
for (unsigned int i = 0; i < num_frames; i++)
{
// Create frame queue resources
auto &frame = frames[i];
frame.command_buffer = std::move(command_buffers[i]);
frame.fence = device.createFenceUnique(fence_create_info);
frame.acquire = device.createSemaphoreUnique({});
frame.complete = device.createSemaphoreUnique({});
}
current_frame = 0;
for (unsigned int i = 0; i < num_frames; i++)
{
// Create resources associated with swapchain images
auto &image = imageviewfbs[i];
image.image = swapchain_images[i];
auto image_view_create_info = vk::ImageViewCreateInfo{}
.setImage(swapchain_images[i])
.setViewType(vk::ImageViewType::e2D)
.setFormat(vk::Format::eB8G8R8A8Unorm)
.setComponents(vk::ComponentMapping())
.setSubresourceRange(vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1));
image.image_view = device.createImageViewUnique(image_view_create_info);
auto framebuffer_create_info = vk::FramebufferCreateInfo{}
.setAttachments(image.image_view.get())
.setWidth(extents.width)
.setHeight(extents.height)
.setLayers(1)
.setRenderPass(render_pass.get());
image.framebuffer = device.createFramebufferUnique(framebuffer_create_info);
}
current_swapchain_image = 0;
return true;
}
bool Swapchain::begin_frame()
{
if (extents.width < 1 || extents.height < 1)
{
printf("Extents too small.\n");
return false;
}
auto &frame = frames[current_frame];
auto result = device.waitForFences(frame.fence.get(), true, 33000000);
if (result != vk::Result::eSuccess)
{
printf("Failed fence\n");
return false;
}
auto result_value = device.acquireNextImageKHR(swapchain.get(), 33000000, frame.acquire.get());
if (result_value.result == vk::Result::eErrorOutOfDateKHR ||
result_value.result == vk::Result::eSuboptimalKHR)
{
recreate();
return begin_frame();
}
if (result_value.result != vk::Result::eSuccess)
{
printf("Random failure %d\n", static_cast<int>(result_value.result));
return false;
}
device.resetFences(frame.fence.get());
current_swapchain_image = result_value.value;
vk::CommandBufferBeginInfo command_buffer_begin_info(vk::CommandBufferUsageFlags{vk::CommandBufferUsageFlagBits::eOneTimeSubmit});
frame.command_buffer->begin(command_buffer_begin_info);
return true;
}
bool Swapchain::end_frame()
{
auto &frame = frames[current_frame];
frame.command_buffer->end();
vk::PipelineStageFlags flags = vk::PipelineStageFlagBits::eColorAttachmentOutput;
vk::SubmitInfo submit_info(
frame.acquire.get(),
flags,
frame.command_buffer.get(),
frame.complete.get());
queue.submit(submit_info, frame.fence.get());
auto present_info = vk::PresentInfoKHR{}
.setWaitSemaphores(frames[current_frame].complete.get())
.setSwapchains(swapchain.get())
.setImageIndices(current_swapchain_image);
auto result = queue.presentKHR(present_info);
current_frame = (current_frame + 1) % num_frames;
if (result != vk::Result::eSuccess)
return false;
return true;
}
vk::Framebuffer Swapchain::get_framebuffer()
{
return imageviewfbs[current_swapchain_image].framebuffer.get();
}
vk::CommandBuffer &Swapchain::get_cmd()
{
return frames[current_frame].command_buffer.get();
}
void Swapchain::begin_render_pass()
{
vk::ClearColorValue colorval;
colorval.setFloat32({ 0.0f, 0.0f, 0.0f, 1.0f });
vk::ClearValue value;
value.setColor(colorval);
auto render_pass_begin_info = vk::RenderPassBeginInfo{}
.setRenderPass(render_pass.get())
.setFramebuffer(imageviewfbs[current_swapchain_image].framebuffer.get())
.setRenderArea(vk::Rect2D({}, extents))
.setClearValues(value);
get_cmd().beginRenderPass(render_pass_begin_info, vk::SubpassContents::eInline);
}
void Swapchain::end_render_pass()
{
get_cmd().endRenderPass();
}
unsigned int Swapchain::get_current_frame()
{
return current_frame;
}
bool Swapchain::wait_on_frame(int frame_num)
{
auto result = device.waitForFences(frames[frame_num].fence.get(), true, 33000000);
return (result == vk::Result::eSuccess);
}
vk::Extent2D Swapchain::get_extents()
{
return extents;
}
vk::RenderPass &Swapchain::get_render_pass()
{
return render_pass.get();
}
unsigned int Swapchain::get_num_frames()
{
return num_frames;
}
} // namespace Vulkan


@ -0,0 +1,75 @@
#pragma once
#include "vulkan/vulkan.hpp"
#include "vulkan/vulkan_handles.hpp"
#include "vulkan/vulkan_structs.hpp"
namespace Vulkan
{
class Swapchain
{
public:
Swapchain(vk::Device device,
vk::PhysicalDevice physical_device,
vk::Queue queue,
vk::SurfaceKHR surface,
vk::CommandPool command_pool);
~Swapchain();
bool create(unsigned int num_frames);
bool recreate();
bool begin_frame();
void begin_render_pass();
void end_render_pass();
bool wait_on_frame(int frame_num);
bool end_frame();
// Returns true if vsync setting was changed, false if it was the same
bool set_vsync(bool on);
vk::Image get_image();
vk::Framebuffer get_framebuffer();
vk::CommandBuffer &get_cmd();
unsigned int get_current_frame();
vk::Extent2D get_extents();
vk::RenderPass &get_render_pass();
unsigned int get_num_frames();
private:
void create_render_pass();
struct Frame
{
vk::UniqueFence fence;
vk::UniqueSemaphore acquire;
vk::UniqueSemaphore complete;
vk::UniqueCommandBuffer command_buffer;
};
struct ImageViewFB
{
vk::Image image;
vk::UniqueImageView image_view;
vk::UniqueFramebuffer framebuffer;
};
vk::UniqueSwapchainKHR swapchain;
vk::Extent2D extents;
vk::UniqueRenderPass render_pass;
unsigned int current_frame = 0;
unsigned int current_swapchain_image = 0;
unsigned int num_frames = 0;
bool vsync = true;
std::vector<Frame> frames;
std::vector<ImageViewFB> imageviewfbs;
vk::Device device;
vk::SurfaceKHR surface;
vk::CommandPool command_pool;
vk::PhysicalDevice physical_device;
vk::Queue queue;
};
} // namespace Vulkan

307
vulkan/vulkan_texture.cpp Normal file

@ -0,0 +1,307 @@
#include "vulkan_texture.hpp"
#include "vulkan/vulkan_enums.hpp"
#include "slang_helpers.hpp"
namespace Vulkan
{
Texture::Texture()
{
image_width = 0;
image_height = 0;
buffer_size = 0;
device = nullptr;
command_pool = nullptr;
allocator = nullptr;
queue = nullptr;
buffer = nullptr;
image = nullptr;
sampler = nullptr;
}
Texture::~Texture()
{
destroy();
}
void Texture::destroy()
{
if (!device || !allocator)
return;
if (sampler)
{
device.destroySampler(sampler);
sampler = nullptr;
}
if (image_width != 0 || image_height != 0)
{
device.destroyImageView(image_view);
allocator.destroyImage(image, image_allocation);
image_width = image_height = 0;
image_view = nullptr;
image = nullptr;
image_allocation = nullptr;
}
if (buffer_size != 0)
{
allocator.destroyBuffer(buffer, buffer_allocation);
buffer_size = 0;
buffer = nullptr;
buffer_allocation = nullptr;
}
}
void Texture::init(vk::Device device_, vk::CommandPool command_, vk::Queue queue_, vma::Allocator allocator_)
{
device = device_;
command_pool = command_;
allocator = allocator_;
queue = queue_;
}
void Texture::init(Context *context)
{
device = context->device;
command_pool = context->command_pool.get();
allocator = context->allocator;
queue = context->queue;
}
void Texture::from_buffer(vk::CommandBuffer cmd,
uint8_t *buffer,
int width,
int height,
int byte_stride)
{
if (image_width != width || image_height != height)
{
destroy();
create(width, height, format, wrap_mode, linear, mipmap);
}
int pixel_size = 4;
if (format == vk::Format::eR5G6B5UnormPack16)
pixel_size = 2;
if (byte_stride == 0)
{
byte_stride = pixel_size * width;
}
auto map = allocator.mapMemory(buffer_allocation);
for (int y = 0; y < height; y++)
{
        auto src = buffer + byte_stride * y;
        auto dst = (uint8_t *)map + width * pixel_size * y;
        memcpy(dst, src, width * pixel_size);
    }
    allocator.unmapMemory(buffer_allocation);
    allocator.flushAllocation(buffer_allocation, 0, width * height * pixel_size);

    auto srr = [](unsigned int i) { return vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, i, 1, 0, 1); };
    auto srl = [](unsigned int i) { return vk::ImageSubresourceLayers(vk::ImageAspectFlagBits::eColor, i, 0, 1); };

    // Transition mip level 0 to a transfer destination for the upload.
    auto barrier = vk::ImageMemoryBarrier{}
        .setImage(image)
        .setOldLayout(vk::ImageLayout::eUndefined)
        .setNewLayout(vk::ImageLayout::eTransferDstOptimal)
        .setSrcAccessMask(vk::AccessFlagBits::eNone)
        .setDstAccessMask(vk::AccessFlagBits::eTransferWrite)
        .setSubresourceRange(srr(0));
    cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTopOfPipe,
                        vk::PipelineStageFlagBits::eTransfer,
                        {}, {}, {}, barrier);

    // Copy the staging buffer into mip level 0.
    auto buffer_image_copy = vk::BufferImageCopy{}
        .setBufferOffset(0)
        .setBufferRowLength(0)
        .setBufferImageHeight(height)
        .setImageExtent(vk::Extent3D(width, height, 1))
        .setImageOffset(vk::Offset3D(0, 0, 0))
        .setImageSubresource(srl(0));
    cmd.copyBufferToImage(this->buffer, image, vk::ImageLayout::eTransferDstOptimal, buffer_image_copy);

    auto mipmap_levels = mipmap ? mipmap_levels_for_size(image_width, image_height) : 1;
    int base_width = image_width;
    int base_height = image_height;
    int base_level = 0;

    // Generate each mip level by blitting down from the level above it.
    for (; base_level + 1 < mipmap_levels; base_level++)
    {
        // Transition the base level to a transfer source.
        barrier
            .setImage(image)
            .setOldLayout(vk::ImageLayout::eTransferDstOptimal)
            .setNewLayout(vk::ImageLayout::eTransferSrcOptimal)
            .setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
            .setDstAccessMask(vk::AccessFlagBits::eTransferRead)
            .setSubresourceRange(srr(base_level));
        cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
                            vk::PipelineStageFlagBits::eTransfer,
                            {}, {}, {}, barrier);

        // Transition the next mip level to a transfer destination.
        barrier
            .setImage(image)
            .setOldLayout(vk::ImageLayout::eUndefined)
            .setNewLayout(vk::ImageLayout::eTransferDstOptimal)
            .setSrcAccessMask(vk::AccessFlagBits::eTransferRead)
            .setDstAccessMask(vk::AccessFlagBits::eTransferWrite)
            .setSubresourceRange(srr(base_level + 1));
        cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
                            vk::PipelineStageFlagBits::eTransfer,
                            {}, {}, {}, barrier);

        // Blit the base level to the next level, halving each dimension
        // and clamping at 1.
        int mipmap_width = base_width >> 1;
        int mipmap_height = base_height >> 1;
        if (mipmap_width < 1)
            mipmap_width = 1;
        if (mipmap_height < 1)
            mipmap_height = 1;
        auto blit = vk::ImageBlit{}
            .setSrcOffsets({ vk::Offset3D(0, 0, 0), vk::Offset3D(base_width, base_height, 1) })
            .setDstOffsets({ vk::Offset3D(0, 0, 0), vk::Offset3D(mipmap_width, mipmap_height, 1) })
            .setSrcSubresource(srl(base_level))
            .setDstSubresource(srl(base_level + 1));
        base_width = mipmap_width;
        base_height = mipmap_height;
        cmd.blitImage(image, vk::ImageLayout::eTransferSrcOptimal, image, vk::ImageLayout::eTransferDstOptimal, blit, vk::Filter::eLinear);

        // Transition the base level to shader readable.
        barrier
            .setOldLayout(vk::ImageLayout::eTransferSrcOptimal)
            .setNewLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
            .setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
            .setDstAccessMask(vk::AccessFlagBits::eShaderRead)
            .setSubresourceRange(srr(base_level));
        cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
                            vk::PipelineStageFlagBits::eFragmentShader,
                            {}, {}, {}, barrier);
    }

    // Transition the last level to shader readable.
    barrier
        .setOldLayout(vk::ImageLayout::eTransferDstOptimal)
        .setNewLayout(vk::ImageLayout::eShaderReadOnlyOptimal)
        .setSrcAccessMask(vk::AccessFlagBits::eTransferWrite)
        .setDstAccessMask(vk::AccessFlagBits::eShaderRead)
        .setSubresourceRange(srr(base_level));
    cmd.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
                        vk::PipelineStageFlagBits::eFragmentShader,
                        {}, {}, {}, barrier);
}
void Texture::from_buffer(uint8_t *buffer, int width, int height, int byte_stride)
{
    vk::CommandBufferAllocateInfo cbai(command_pool, vk::CommandBufferLevel::ePrimary, 1);
    auto command_buffer_vector = device.allocateCommandBuffersUnique(cbai);
    auto &cmd = command_buffer_vector[0];

    // Record the upload into a one-shot command buffer and block until
    // the queue finishes with it.
    cmd->begin({ vk::CommandBufferUsageFlagBits::eOneTimeSubmit });
    from_buffer(cmd.get(), buffer, width, height, byte_stride);
    cmd->end();

    vk::SubmitInfo si{};
    si.setCommandBuffers(cmd.get());
    queue.submit(si);
    queue.waitIdle();
}
void Texture::create(int width, int height, vk::Format fmt, vk::SamplerAddressMode wrap_mode, bool linear, bool mipmap)
{
    assert(image_width + image_height + buffer_size == 0);

    this->mipmap = mipmap;
    this->wrap_mode = wrap_mode;
    this->linear = linear;
    int mipmap_levels = mipmap ? mipmap_levels_for_size(width, height) : 1;
    format = fmt;

    auto aci = vma::AllocationCreateInfo{}
        .setUsage(vma::MemoryUsage::eAuto);
    auto ici = vk::ImageCreateInfo{}
        .setUsage(vk::ImageUsageFlagBits::eTransferSrc | vk::ImageUsageFlagBits::eTransferDst | vk::ImageUsageFlagBits::eSampled)
        .setImageType(vk::ImageType::e2D)
        .setExtent(vk::Extent3D(width, height, 1))
        .setMipLevels(mipmap_levels)
        .setArrayLayers(1)
        .setFormat(format)
        .setInitialLayout(vk::ImageLayout::eUndefined)
        .setSamples(vk::SampleCountFlagBits::e1)
        .setSharingMode(vk::SharingMode::eExclusive);
    std::tie(image, image_allocation) = allocator.createImage(ici, aci);

    // Host-visible staging buffer: 2 bytes per pixel for the packed
    // 16-bit format, 4 bytes per pixel otherwise.
    buffer_size = width * height * 4;
    if (format == vk::Format::eR5G6B5UnormPack16)
        buffer_size = width * height * 2;
    auto bci = vk::BufferCreateInfo{}
        .setSize(buffer_size)
        .setUsage(vk::BufferUsageFlagBits::eTransferSrc);
    aci.setRequiredFlags(vk::MemoryPropertyFlagBits::eHostVisible)
        .setFlags(vma::AllocationCreateFlagBits::eHostAccessSequentialWrite);
    std::tie(buffer, buffer_allocation) = allocator.createBuffer(bci, aci);

    auto isrr = vk::ImageSubresourceRange{}
        .setAspectMask(vk::ImageAspectFlagBits::eColor)
        .setBaseArrayLayer(0)
        .setBaseMipLevel(0)
        .setLayerCount(1)
        .setLevelCount(mipmap_levels);
    auto ivci = vk::ImageViewCreateInfo{}
        .setImage(image)
        .setViewType(vk::ImageViewType::e2D)
        .setFormat(format)
        .setComponents(vk::ComponentMapping())
        .setSubresourceRange(isrr);
    image_view = device.createImageView(ivci);

    image_width = width;
    image_height = height;

    auto sampler_create_info = vk::SamplerCreateInfo{}
        .setMagFilter(vk::Filter::eNearest)
        .setMinFilter(vk::Filter::eNearest)
        .setAddressModeU(wrap_mode)
        .setAddressModeV(wrap_mode)
        .setAddressModeW(wrap_mode)
        .setBorderColor(vk::BorderColor::eFloatOpaqueBlack);
    if (linear)
        sampler_create_info
            .setMagFilter(vk::Filter::eLinear)
            .setMinFilter(vk::Filter::eLinear);
    if (mipmap)
        sampler_create_info
            .setMinLod(0.0f)
            .setMaxLod(10000.0f)
            .setMipmapMode(vk::SamplerMipmapMode::eLinear);
    sampler = device.createSampler(sampler_create_info);
}
void Texture::discard_staging_buffer()
{
    if (buffer_size != 0)
    {
        allocator.destroyBuffer(buffer, buffer_allocation);
        buffer_size = 0;
    }
}
} // namespace Vulkan
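The blit loop in `from_buffer` above derives each mip level's extent by halving the previous one and clamping at 1, with the level count supplied by `mipmap_levels_for_size` (not shown in this hunk). A standalone sketch of that arithmetic, assuming the usual `floor(log2(max(w, h))) + 1` definition for the level count:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Assumed definition of mipmap_levels_for_size(): one level per halving
// of the larger dimension, i.e. floor(log2(max(w, h))) + 1. The real
// helper lives elsewhere in the source and may differ.
static int mipmap_levels_for_size(int width, int height)
{
    int levels = 1;
    for (int size = std::max(width, height); size > 1; size >>= 1)
        levels++;
    return levels;
}

// Mirrors the halving-and-clamping in the blit loop: each level is half
// the previous one in each dimension, never smaller than 1.
static std::vector<std::pair<int, int>> mip_chain(int width, int height)
{
    int levels = mipmap_levels_for_size(width, height);
    std::vector<std::pair<int, int>> chain{ { width, height } };
    for (int level = 1; level < levels; level++)
    {
        width = std::max(width >> 1, 1);
        height = std::max(height >> 1, 1);
        chain.emplace_back(width, height);
    }
    return chain;
}
```

For a 256x224 SNES frame this yields nine levels ending at 1x1, matching the loop's behavior of continuing until `base_level + 1 == mipmap_levels`.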

vulkan/vulkan_texture.hpp Normal file
@ -0,0 +1,43 @@
#pragma once

#include "vk_mem_alloc.hpp"
#include "vulkan_context.hpp"
#include <cstddef>
#include <cstdint>

namespace Vulkan
{

struct Texture
{
    Texture();
    void init(vk::Device device, vk::CommandPool command, vk::Queue queue, vma::Allocator allocator);
    void init(Vulkan::Context *context);
    ~Texture();

    void create(int width, int height, vk::Format fmt, vk::SamplerAddressMode wrap_mode, bool linear, bool mipmap);
    void destroy();
    void from_buffer(vk::CommandBuffer cmd, uint8_t *buffer, int width, int height, int byte_stride = 0);
    void from_buffer(uint8_t *buffer, int width, int height, int byte_stride = 0);
    void discard_staging_buffer();

    vk::Sampler sampler;
    vk::ImageView image_view;
    vk::Image image;
    vk::Format format;
    vk::SamplerAddressMode wrap_mode;
    vma::Allocation image_allocation;
    int image_width;
    int image_height;
    bool mipmap;
    bool linear;

    // Host-visible staging buffer; freed by discard_staging_buffer().
    vk::Buffer buffer;
    vma::Allocation buffer_allocation;
    size_t buffer_size;

    vk::Device device;
    vk::Queue queue;
    vk::CommandPool command_pool;
    vma::Allocator allocator;
};

} // namespace Vulkan
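The `byte_stride = 0` defaults in the `from_buffer` declarations above suggest that a zero stride means tightly packed source rows. A minimal standalone sketch of the row-by-row staging copy under that assumption (`copy_rows` is an illustrative name, not a function from the source):

```cpp
#include <cstdint>
#include <cstring>

// Copies `height` rows of width * pixel_size bytes from src into a
// tightly packed dst, as the staging-upload loop does. Source rows are
// byte_stride bytes apart; a stride of 0 is treated as tightly packed,
// matching the byte_stride = 0 default argument.
static void copy_rows(uint8_t *dst, const uint8_t *src,
                      int width, int height, int pixel_size, int byte_stride)
{
    if (byte_stride == 0)
        byte_stride = width * pixel_size;
    for (int y = 0; y < height; y++)
        memcpy(dst + width * pixel_size * y, src + byte_stride * y, width * pixel_size);
}
```

Because the destination is packed, `bufferRowLength` can stay 0 in the subsequent `vk::BufferImageCopy`, letting Vulkan infer the row pitch from the image extent.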