path: root/src/video_core/renderer_vulkan/vk_stream_buffer.h
Date | Commit message | Author | Files | Lines
2021-07-26 | vk_stream_buffer: Remove unused stream buffer | ReinUsesLisp | 1 | -76/+0
Remove unused file.
2021-01-03 | renderer_vulkan: Rename VKDevice to Device | ReinUsesLisp | 1 | -3/+3
The "VK" prefix predates the "Vulkan" namespace. It was carried around the codebase for consistency. "VKDevice" currently is a bad alias with "VkDevice" (only an upcase character of difference) that can cause confusion. Rename all instances of it.
2020-12-31 | vulkan_common: Rename renderer_vulkan/wrapper.h to vulkan_common/vulkan_wrapper.h | ReinUsesLisp | 1 | -1/+1
Allows sharing Vulkan wrapper code between different rendering backends.
2020-12-30 | video_core: Rewrite the texture cache | ReinUsesLisp | 1 | -7/+5
The current texture cache has several points that hurt maintainability and performance. It's easy to break unrelated parts of the cache when doing minor changes, and the cache can easily forget valuable information about the cached textures due to CPU writes or simply through its normal usage. This commit aims to address those issues.
2020-09-19 | renderer_vulkan: Make unconditional use of VK_KHR_timeline_semaphore | ReinUsesLisp | 1 | -3/+2
This reworks how host<->device synchronization works on the Vulkan backend. Instead of "protecting" resources with a fence and marking them as free when the fence is known to have been signalled by the GPU, use timeline semaphores. Vulkan timeline semaphores give us a subset of what D3D12 fences can do: as far as we are concerned, a timeline semaphore is a value set by the host or the device that either of them can wait on. Taking advantage of this, we can keep a monotonically increasing atomic value for each submission to the graphics queue. Instead of protecting resources with a fence, we simply store the current logical tick (the atomic value stored in CPU memory). When we want to know if a resource is free, we compare its tick against the current GPU tick. This greatly simplifies resource management code, and the free status of resources should have fewer false negatives. To work around bugs in validation layers, when these are attached there's a thread waiting for timeline semaphores.
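A minimal sketch of the tick-tracking idea described above, using the raw Vulkan C API; the class and member names (MasterSemaphore, KnownGpuTick, IsFree) are illustrative and not taken from the actual codebase:

    // Illustrative tick tracking on top of a timeline semaphore.
    // With VK_KHR_timeline_semaphore on Vulkan 1.1, the *KHR entry points apply.
    #include <atomic>
    #include <cstdint>
    #include <vulkan/vulkan.h>

    class MasterSemaphore {
    public:
        explicit MasterSemaphore(VkSemaphore timeline_semaphore)
            : semaphore{timeline_semaphore} {}

        // Tick that the next queue submission will signal on the timeline.
        uint64_t NextTick() noexcept {
            return current_tick.fetch_add(1, std::memory_order_relaxed) + 1;
        }

        // Latest tick the GPU is known to have reached.
        uint64_t KnownGpuTick(VkDevice device) const {
            uint64_t value{};
            vkGetSemaphoreCounterValue(device, semaphore, &value);
            return value;
        }

        // A resource tagged with 'resource_tick' is free once the GPU has passed it.
        bool IsFree(VkDevice device, uint64_t resource_tick) const {
            return resource_tick <= KnownGpuTick(device);
        }

    private:
        VkSemaphore semaphore;                  // created with VK_SEMAPHORE_TYPE_TIMELINE
        std::atomic<uint64_t> current_tick{0};  // monotonically increasing CPU-side tick
    };

Each submission would then signal NextTick() on the timeline (via VkTimelineSemaphoreSubmitInfo, or its KHR variant with the extension), and every resource records the tick of the submission that last used it.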
2020-06-24 | gl_buffer_cache: Mark buffers as resident | ReinUsesLisp | 1 | -1/+5
Mark the stream buffer and cached buffers as resident and query their GPU addresses. This allows us to use those addresses with several proprietary Nvidia extensions.
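A minimal sketch of what making a buffer resident and querying its GPU address looks like through the GL_NV_shader_buffer_load extension; the loader header and the wrapper function are assumptions for illustration:

    #include <glad/glad.h>  // any GL loader exposing the NV extension entry points

    // Make the buffer resident and return its 64-bit GPU address so it can be
    // fed to proprietary Nvidia extensions. Error handling omitted for brevity.
    GLuint64EXT MakeResidentAndQueryAddress(GLuint buffer) {
        GLuint64EXT gpu_address{};
        glMakeNamedBufferResidentNV(buffer, GL_READ_ONLY);
        glGetNamedBufferParameterui64vNV(buffer, GL_BUFFER_GPU_ADDRESS_NV, &gpu_address);
        return gpu_address;
    }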
2020-06-09 | buffer_cache: Avoid passing references of shared pointers and misc style changes | ReinUsesLisp | 1 | -1/+1
Instead of using a shared pointer as the template argument, use the underlying type and manage shared pointers explicitly. This makes it easier to remove shared pointers from the cache later. While we are at it, make some misc style changes and general improvements (like using insert_or_assign instead of operator[] followed by operator=).
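For illustration, the insert_or_assign idiom mentioned above; the map type, key, and Buffer placeholder are not the cache's real types:

    #include <cstdint>
    #include <memory>
    #include <unordered_map>

    struct Buffer {};  // placeholder for the cached buffer type

    std::unordered_map<uint64_t, std::shared_ptr<Buffer>> cache;

    void Register(uint64_t cpu_addr, std::shared_ptr<Buffer> buffer) {
        // One call that inserts or overwrites, instead of cache[cpu_addr] = std::move(buffer),
        // which first default-constructs the mapped value when the key is missing.
        cache.insert_or_assign(cpu_addr, std::move(buffer));
    }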
2020-04-17 | vk_stream_buffer: Fix out of memory on boot on recent Nvidia drivers | ReinUsesLisp | 1 | -2/+3
Nvidia recently introduced a new memory type for data streaming (awesome!), but yuzu assumed that every heap had enough memory for the expected stream buffer size (256 MiB). This worked fine on AMD, but Nvidia's new memory heap was smaller than 256 MiB. This commit drops that assumption and allocates a bit less than the size of the preferred heap, with a maximum of 256 MiB (to avoid allocating all system memory on integrated devices). - Fixes a crash on NVIDIA 450.82.0.0
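A sketch of the size selection this describes; the exact headroom factor is an assumption for illustration, not the value used in the actual commit:

    #include <algorithm>
    #include <cstdint>

    constexpr uint64_t MAX_STREAM_BUFFER_SIZE = 256ULL * 1024 * 1024;  // 256 MiB cap

    // Pick a stream buffer size: a bit less than the preferred heap so it is never
    // exhausted, capped at 256 MiB to avoid eating all memory on integrated GPUs.
    uint64_t ChooseStreamBufferSize(uint64_t preferred_heap_size) {
        const uint64_t allocable = preferred_heap_size - preferred_heap_size / 10;  // ~90% of the heap
        return std::min(allocable, MAX_STREAM_BUFFER_SIZE);
    }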
2020-04-11 | renderer_vulkan: Drop Vulkan-Hpp | ReinUsesLisp | 1 | -10/+8
2020-01-06 | vk_stream_buffer/vk_buffer_cache: Avoid halting and use generic cache | ReinUsesLisp | 1 | -20/+24
Before this commit, once the stream buffer was full (no more bytes to write before looping), it waited for all previous operations to finish. This was a temporary solution with a noticeable performance penalty (from what a profiler showed). To avoid this, mark usages of the stream buffer with fences and, once it loops, wait for those fences to be signaled; on average this never has to wait. Each fence knows where its usage finishes, resulting in a non-paged stream buffer. On the other side, the buffer cache is reimplemented using the generic buffer cache. It makes use of the staging buffer pool and the new stream buffer.
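A rough sketch of the fence-marking scheme described here; the Watch struct, the member names, and the simplification of waiting on every outstanding fence at the wrap point (rather than lazily as the write cursor crosses each fence's region) are all illustrative:

    #include <cstdint>
    #include <vector>
    #include <vulkan/vulkan.h>

    struct Watch {
        VkFence fence{};        // signaled once the GPU is done with the region
        uint64_t upper_bound{}; // end offset of the region this fence protects
    };

    class StreamBuffer {
    public:
        // Reserve 'size' bytes and return the offset to write to.
        uint64_t Map(VkDevice device, uint64_t size) {
            if (offset + size > buffer_size) {
                // Looping: wait for the fences marking previous usages before reusing memory.
                for (const Watch& watch : watches) {
                    vkWaitForFences(device, 1, &watch.fence, VK_TRUE, UINT64_MAX);
                }
                watches.clear();
                offset = 0;
            }
            const uint64_t mapped_offset = offset;
            offset += size;
            return mapped_offset;
        }

        // Mark everything written since the last Unmap as protected by 'fence'.
        void Unmap(VkFence fence) { watches.push_back({fence, offset}); }

    private:
        std::vector<Watch> watches;
        uint64_t offset = 0;
        uint64_t buffer_size = 256ULL * 1024 * 1024;
    };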
2019-07-07 | vk_scheduler: Drop execution context in favor of views | ReinUsesLisp | 1 | -1/+1
Instead of passing an execution context by copy throughout the whole Vulkan call hierarchy, use a command buffer view and fence view approach. The views dereference the command buffer or fence internally, preventing the user from accidentally using an outdated version during normal usage. It is still possible to store an outdated one if it is cast to VKFence& or vk::CommandBuffer. While changing this file, add an extra parameter to Flush and Finish to allow releasing the fence from these calls.
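An illustrative sketch of the view idea, using a raw VkCommandBuffer handle rather than the Vulkan-Hpp types used at the time; the class name is made up:

    #include <vulkan/vulkan.h>

    // Holds a reference to the scheduler's current command buffer and resolves it
    // on every access, so callers cannot silently keep using a stale handle.
    class CommandBufferView {
    public:
        explicit CommandBufferView(const VkCommandBuffer& current_) : current{current_} {}

        operator VkCommandBuffer() const { return current; }

    private:
        const VkCommandBuffer& current;  // reference into the scheduler, never a copy
    };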
2019-02-26 | vk_stream_buffer: Remove copy code path | ReinUsesLisp | 1 | -10/+9
2019-02-24 | vk_stream_buffer: Implement a stream buffer | ReinUsesLisp | 1 | -0/+73
This manages two kinds of streaming buffers: one for unified memory models and one for dedicated GPUs. The first one skips the copy from the staging buffer to the real buffer, since it creates a unified buffer. This implementation waits for all fences to finish their operation before "invalidating". That is suboptimal, since it should instead allocate another buffer or start searching from the beginning; there is room for improvement here. This could also handle AMD's "pinned" memory (a heap with 256 MiB) that seems to be designed for buffer streaming.
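A sketch of how the two paths could be distinguished: a memory type that is both device-local and host-visible indicates a unified model where the staging copy can be skipped. The helper name and the simplified check are assumptions, shown with the raw Vulkan C API:

    #include <vulkan/vulkan.h>

    bool HasUnifiedMemoryModel(const VkPhysicalDeviceMemoryProperties& props) {
        for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
            const VkMemoryPropertyFlags flags = props.memoryTypes[i].propertyFlags;
            const VkMemoryPropertyFlags wanted =
                VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
            if ((flags & wanted) == wanted) {
                return true;   // unified path: write the real buffer directly
            }
        }
        return false;          // dedicated path: write a staging buffer, then copy
    }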