path: root/src/core/memory.cpp
Commit message · Author · Age · Files · Lines (-/+)
* ARM/Memory: Correct Exclusive Monitor and Implement Exclusive Memory Writes.Fernando Sahmkow2020-06-271-0/+98
|
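As context for the exclusive-monitor work above, a minimal sketch of what an exclusive memory write boils down to on the host side, assuming it is backed by a compare-and-swap on the target location (the function name and signature are illustrative, not yuzu's actual interface):

    #include <atomic>
    #include <cstdint>

    // The store succeeds only if the location still holds the value observed by
    // the earlier exclusive read; otherwise the guest's load-exclusive /
    // store-exclusive loop is expected to retry.
    bool WriteExclusive32(std::atomic<std::uint32_t>* host_ptr,
                          std::uint32_t expected, std::uint32_t desired) {
        return host_ptr->compare_exchange_strong(expected, desired);
    }

A real implementation would also have to route the access through the page table and support the other access widths.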
* General: Recover Prometheus project from harddrive failure Fernando Sahmkow2020-06-271-7/+4
| | | | | | | This commit: implements CPU interrupts, replaces cycle timing with host timing, reworks the kernel's scheduler, introduces the Idle and Suspended states, recreates the bootmanager, and initializes the multicore system.
* core: memory: Fix memory access on page boundaries.bunnei2020-04-171-6/+39
| | | | - Fixes Super Smash Bros. Ultimate.
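A self-contained sketch of the kind of access the fix above is about: a multi-byte read whose address sits near the end of a page has to be split across two host pages instead of being serviced through a single pointer. The toy page table and names here are illustrative only; unmapped-page and bounds checks are omitted.

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <cstring>

    constexpr std::size_t PAGE_BITS = 12;
    constexpr std::size_t PAGE_SIZE = std::size_t{1} << PAGE_BITS;

    // Toy page table: guest page index -> host pointer to the start of that page.
    // (Entries must be filled in before use; null/unmapped checks are omitted.)
    std::array<std::uint8_t*, std::size_t{1} << 20> page_table{};

    std::uint64_t Read64(std::uint64_t vaddr) {
        std::uint64_t value = 0;
        auto* dest = reinterpret_cast<std::uint8_t*>(&value);
        std::size_t remaining = sizeof(value);
        while (remaining != 0) {
            // Copy at most up to the end of the current page, then continue on
            // the next page; splitting the access like this is the essence of
            // handling page-boundary reads and writes.
            const std::size_t offset = vaddr & (PAGE_SIZE - 1);
            const std::size_t copy_amount = std::min(remaining, PAGE_SIZE - offset);
            const std::uint8_t* src = page_table[vaddr >> PAGE_BITS] + offset;
            std::memcpy(dest, src, copy_amount);
            vaddr += copy_amount;
            dest += copy_amount;
            remaining -= copy_amount;
        }
        return value;
    }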
* core: memory: Updates for new VMM.bunnei2020-04-171-100/+52
|
* core: memory: Move to Core::Memory namespace.bunnei2020-04-171-2/+2
| | | | - Helpful to disambiguate it from the Kernel::Memory namespace.
* Buffer Cache: Use vAddr instead of physical memory.Fernando Sahmkow2020-04-061-0/+115
|
* GPU: Setup Flush/Invalidate to use VAddr instead of CacheAddrFernando Sahmkow2020-04-061-6/+6
|
* core/memory: Create a special MapMemoryRegion for physical memory.Markus Wick2020-01-181-0/+11
| | | | This allows us to create a fastmem arena within the memory.cpp helpers.
* core/memory + arm/dynarmic: Use a global offset within our arm page table.Markus Wick2020-01-011-9/+16
| | | | | | This saves us two x64 instructions per load/store instruction. TODO: Clean up our memory code. We can use this optimization here as well.
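One plausible reading of the "global offset" trick, sketched with invented names: each page-table entry stores the host pointer pre-biased by the page's guest base, so the JIT can form a host address without masking the page offset or adding the page base separately.

    #include <cstdint>

    // Entry for a mapped page: host pointer minus the guest virtual address of
    // the page. Unmapped pages would simply store 0.
    std::uintptr_t MakeBiasedEntry(std::uint8_t* host_page, std::uint64_t page_vaddr) {
        return reinterpret_cast<std::uintptr_t>(host_page) - page_vaddr;
    }

    // With biased entries the host address is table[vaddr >> page_bits] + vaddr:
    // one shift, one indexed load and one add per access, which is plausibly
    // where the two saved x64 instructions come from.
    std::uint8_t* HostPointer(const std::uintptr_t* table, std::uint64_t vaddr,
                              unsigned page_bits) {
        return reinterpret_cast<std::uint8_t*>(table[vaddr >> page_bits] + vaddr);
    }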
* core/memory; Migrate over SetCurrentPageTable() to the Memory classLioncash2019-11-271-15/+16
| | | | | | | Now that literally every other API function is converted over to the Memory class, we can just move the file-local page table into the Memory implementation class, finally getting rid of global state within the memory code.
* core/memory: Migrate over GetPointerFromVMA() to the Memory classLioncash2019-11-271-36/+36
| | | | | | | | Now that everything else is migrated over, this is essentially just code relocation and conversion of a global accessor to the class member variable. All that remains is to migrate over the page table.
* core/memory: Migrate over Write{8, 16, 32, 64, Block} to the Memory classLioncash2019-11-271-92/+128
| | | | | | | | | The Write functions are used slightly less than the Read functions, which makes these a bit nicer to move over. The only adjustments we really need to make here are to Dynarmic's exclusive monitor instance. We need to keep a reference to the currently active memory instance to perform exclusive read/write operations.
* core/memory: Migrate over Read{8, 16, 32, 64, Block} to the Memory classLioncash2019-11-271-96/+132
| | | | | | | | | | | | | | With all of the trivial parts of the memory interface moved over, we can get right into moving over the bits that are used. Note that this does require the use of GetInstance from the global system instance to be used within hle_ipc.cpp and the gdbstub. This is fine for the time being, as they both already rely on the global system instance in other functions. These will be removed in a change directed at both of these respectively. For now, it's sufficient, as it still accomplishes the goal of de-globalizing the memory code.
* core/memory: Migrate over ZeroBlock() and CopyBlock() to the Memory classLioncash2019-11-271-89/+110
| | | | | These currently aren't used anywhere in the codebase, so these are very trivial to move over to the Memory class.
* core/memory: Migrate over RasterizerMarkRegionCached() to the Memory classLioncash2019-11-271-63/+67
| | | | | This is only used within the accelerated rasterizer in two places, so this is also a very trivial migration.
* core/memory: Migrate over ReadCString() to the Memory classLioncash2019-11-271-14/+19
| | | | | This only had one usage spot, so this is fairly straightforward to convert over.
* core/memory: Migrate over GetPointer()Lioncash2019-11-271-15/+23
| | | | | With all of the interfaces ready for migration, it's trivial to migrate over GetPointer().
* core/memory: Move memory read/write implementation functions into an anonymous namespaceLioncash2019-11-271-97/+98
| | | | | | These will eventually be migrated into the main Memory class, but for now, we put them in an anonymous namespace so that the other functions that use them can be migrated over separately.
* core/memory: Migrate over address checking functions to the new Memory classLioncash2019-11-271-20/+31
| | | | | | | | | A fairly straightforward migration. These member functions can mostly be moved over verbatim with minor changes. We already have the necessary plumbing in the places where they're used. IsKernelVirtualAddress() can remain a non-member function, since it doesn't rely on class state in any form.
* core/memory: Migrate over memory mapping functions to the new Memory classLioncash2019-11-271-71/+106
| | | | | | Migrates all of the direct mapping facilities over to the new memory class. In the process, this also obsoletes the need for memory_setup.h, so we can remove it entirely from the project.
* core/memory: Introduce skeleton of Memory classLioncash2019-11-271-0/+12
| | | | | | | | | Currently, the main memory management code is one of the remaining places where we have global state. The next series of changes will aim to rectify this. This change simply introduces the main skeleton of the class that will contain all the necessary state.
* core: Remove Core::CurrentProcess()Lioncash2019-10-061-5/+5
| | | | | | This only encourages the use of the global system instance (which will be phased out long-term). Instead, we use the system function call directly to remove the appealing but discouraged shorthand.
* Core/Memory: Only FlushAndInvalidate GPU if the page is marked as RasterizerCachedMemoryFernando Sahmkow2019-09-191-2/+7
| | | | | This commit avoids Invalidating and Flushing the GPU if the page is not marked as a RasterizerCache Page.
* memory: Remove unused includesLioncash2019-07-061-2/+0
| | | | | These aren't used within the central memory management code, so they can be removed.
* core/cpu_core_manager: Create threads separately from initialization.Lioncash2019-04-121-8/+8
| Our initialization process is a little wonkier than one would expect when it comes to code flow. We initialize the CPU last, as opposed to hardware, where the CPU obviously needs to be first, otherwise nothing else would work, and we have code that adds checks to get around this. For example, in the page table setting code, we check to see if the system is turned on before we even notify the CPU instances of a page table switch. This results in dead code (at the moment), because the only time a page table switch will occur is when the system is *not* running, preventing the emulated CPU instances from being notified of a page table switch in a convenient manner (technically the code path could be taken, but we don't emulate the process creation svc handlers yet). This moves thread creation into its own member function of the core manager and restores a little order (and predictability) to our initialization process. Previously, in the multi-threaded cases, we'd kick off several threads before even the main kernel process was created and ready to execute (gross!). Now the initialization process is like so:
|   1. Timers
|   2. CPU
|   3. Kernel
|   4. Filesystem stuff (kind of gross, but can be amended trivially)
|   5. Applet stuff (ditto in terms of being kind of gross)
|   6. Main process (will be moved into the loading step in a following change)
|   7. Telemetry (this should be initialized last in the future)
|   8. Services (4 and 5 should ideally be alongside this)
|   9. GDB (gross; uses namespace-scope state and needs to be refactored into a class or booted altogether)
|   10. Renderer
|   11. GPU (will also have its threads created in a separate step in a following change)
| Which... isn't *ideal* per se; however, getting the wonky intertwining of CPU state initialization out of this mix gets rid of most of the footguns when it comes to our initialization process.
* core/memory: Remove GetCurrentPageTable()Lioncash2019-04-071-4/+0
| | | | | Now that nothing actually touches the internal page table aside from the memory subsystem itself, we can remove the accessor to it.
* memory: Check that core is powered on before attempting to use GPU.bunnei2019-03-211-1/+1
| | | | | - GPU will be released on shutdown, before pages are unmapped. - On subsequent runs, current_page_table will not be nullptr, but the GPU might not be valid yet.
* core: Move PageTable struct into Common.bunnei2019-03-171-74/+60
|
* memory: Simplify rasterizer cache operations.bunnei2019-03-161-60/+21
|
* gpu: Use host address for caching instead of guest address.bunnei2019-03-151-5/+8
|
* gpu: Move command processing to another thread.bunnei2019-03-071-4/+4
|
* Memory: don't lock hle mutex in memory read/writeWeiyi Wang2019-03-021-6/+0
| | | | The comment already invalidates itself: neither MMIO nor the rasterizer cache belongs to HLE kernel state. The scope of this mutex is too large if MMIO or the cache is included, which is prone to deadlock when multiple threads acquire these resources at the same time. If necessary, each MMIO component or the rasterizer should have its own lock.
* Speed up memory page mapping (#2141)Annomatg2019-02-271-6/+11
| | | | - Memory::MapPages' total sample count was reduced from 4.6% to 1.06%. - From the main menu into the game, it dropped from 1.03% to 0.35%.
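The percentages above are profiler sample counts for the page-mapping loop. As a rough, hypothetical illustration of what such a loop does (not necessarily the exact change made in this PR), mapping a contiguous region fills consecutive page-table entries while advancing the host pointer one page at a time:

    #include <cstdint>
    #include <vector>

    constexpr std::size_t PAGE_BITS = 12;
    constexpr std::size_t PAGE_SIZE = std::size_t{1} << PAGE_BITS;

    // Map `num_pages` consecutive guest pages to a contiguous block of host
    // memory. The host pointer is advanced by one page per iteration instead
    // of being recomputed from the base for every page.
    void MapPages(std::vector<std::uint8_t*>& page_table, std::uint64_t base_vaddr,
                  std::size_t num_pages, std::uint8_t* host_memory) {
        const std::size_t index = base_vaddr >> PAGE_BITS;
        for (std::size_t i = 0; i < num_pages; ++i) {
            page_table[index + i] = host_memory; // bounds/overlap checks omitted
            host_memory += PAGE_SIZE;
        }
    }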
* Fixed uninitialized memory due to missing returns in canaryDavid Marcec2018-12-191-0/+1
| | | | Functions which are supposed to crash on non-canary builds usually don't return anything, which led to uninitialized memory being used.
* memory: Convert ASSERT into a DEBUG_ASSERT within GetPointerFromVMA()Lioncash2018-12-061-1/+1
| | | | | | Given memory should always be expected to be valid during normal execution, this should be a debug assertion, rather than a check in regular builds.
* vm_manager: Make vma_map privateLioncash2018-12-061-6/+5
| | | | | | | | | | | This was only ever public so that code could check whether or not a handle was valid or not. Instead of exposing the object directly and allowing external code to potentially mess with the map contents, we just provide a member function that allows checking whether or not a handle is valid. This makes all member variables of the VMManager class private except for the page table.
* Call shrink_to_fit after page-table vector resizing to cause crt to actually lower vector capacity. For 36-bit titles saves 800MB of commit.heapo2018-12-051-0/+8
|
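A minimal illustration of the pattern described in that commit, with made-up names: std::vector::resize() to a smaller size keeps the old capacity, so an explicit shrink_to_fit() is needed before the allocator will actually return the memory.

    #include <cstdint>
    #include <vector>

    void ResizePageTable(std::vector<std::uint8_t*>& pointers, std::size_t num_pages) {
        pointers.resize(num_pages);
        // shrink_to_fit() is a non-binding request in the standard, but in
        // practice it releases the excess commit left over from the larger
        // previous size.
        pointers.shrink_to_fit();
    }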
* global: Use std::optional instead of boost::optional (#1578)Frederic L2018-10-301-1/+1
|   * get rid of boost::optional
|   * Remove optional references
|   * Use std::reference_wrapper for optional references
|   * Fix clang format
|   * Fix clang format part 2
|   * Addressed feedback
|   * Fix clang format and macOS build
* kernel/process: Make data member variables privateLioncash2018-09-301-7/+7
| | | | | | | Makes the public interface consistent in terms of how accesses are done on a process object. It also makes it slightly nicer to reason about the logic of the process class, as we don't want to expose everything to external code.
* memory: Dehardcode the use of fixed memory range constantsLioncash2018-09-251-5/+7
| | | | | | | | The locations of these can actually vary depending on the address space layout, so we shouldn't be using these when determining where to map memory or be using them as offsets for calculations. This keeps all the memory ranges flexible and malleable based off of the virtual memory manager instance state.
* memory: Dehardcode the use of a 36-bit address spaceLioncash2018-09-251-2/+16
| | | | | Given games can also request a 32-bit or 39-bit address space, we shouldn't be hardcoding the address space range as 36-bit.
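The arithmetic behind de-hardcoding the width is simple; a sketch, assuming 4 KiB pages (names invented for the example):

    #include <cstddef>

    constexpr std::size_t PAGE_BITS = 12;

    // Number of page-table entries for a given address-space width, instead of
    // a hard-coded 36-bit assumption: 32-, 36- and 39-bit layouts give 2^20,
    // 2^24 and 2^27 pages respectively.
    constexpr std::size_t NumPageTableEntries(unsigned address_space_width_bits) {
        return std::size_t{1} << (address_space_width_bits - PAGE_BITS);
    }

    static_assert(NumPageTableEntries(36) == (std::size_t{1} << 24), "36-bit check");
    static_assert(NumPageTableEntries(39) == (std::size_t{1} << 27), "39-bit check");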
* Port #4182 from Citra: "Prefix all size_t with std::"fearlessTobi2018-09-151-27/+28
|
* gl_renderer: Cache textures, framebuffers, and shaders based on CPU address.bunnei2018-08-311-36/+15
|
* gpu: Make memory_manager privateLioncash2018-08-281-2/+2
| | | | | | | | | | Makes the class interface consistent and provides accessors for obtaining a reference to the memory manager instance. Given we also return references, this makes our more flimsy uses of const apparent, given const doesn't propagate through pointers in the way one would typically expect. This makes our mutable state more apparent in some places.
* renderer_base: Make Rasterizer() return the rasterizer by referenceLioncash2018-08-041-4/+4
| | | | | | | All calling code assumes that the rasterizer will be in a valid state, which is a totally fine assumption. The only way the rasterizer wouldn't be is if initialization is done incorrectly or fails, which is checked against in System::Init().
* video_core: Eliminate the g_renderer global variableLioncash2018-08-041-8/+10
| | | | | | | | | | | | | | We move the initialization of the renderer to the core class, while keeping the creation of it and any other specifics in video_core. This way we can ensure that the renderer is initialized and doesn't give unfettered access to the renderer. This also makes dependencies on types more explicit. For example, the GPU class doesn't need to depend on the existence of a renderer, it only needs to care about whether or not it has a rasterizer, but since it was accessing the global variable, it was also making the renderer a part of its dependency chain. By adjusting the interface, we can get rid of this dependency.
* memory: Remove unused GetSpecialHandlers() functionLioncash2018-08-031-16/+0
| | | | This is just unused code, so we may as well get rid of it.
* core/memory: Get rid of 3DS leftoversLioncash2018-08-031-106/+0
| | | | Removes leftover code from citra that isn't needed.
* Merge pull request #690 from lioncash/movebunnei2018-07-191-3/+5
|\ | | | | core/memory, core/hle/kernel: Use std::move where applicable
| * core/memory, core/hle/kernel: Use std::move where applicableLioncash2018-07-191-3/+5
| | | | | | | | Avoids pointless copies
* | core/memory: Remove unused function GetSpecialHandlers() and an unused variable in ZeroBlock()Lioncash2018-07-191-7/+0
|/
* Update clang formatJames Rowe2018-07-031-12/+12
|
* Rename logging macro back to LOG_*James Rowe2018-07-031-12/+12
|
* Kernel/Arbiters: Fix casts, cleanup comments/magic numbersMichael Scire2018-06-221-0/+4
|
* core: Implement multicore support.bunnei2018-05-111-2/+7
|
* general: Make formatting of logged hex values more straightforwardLioncash2018-05-021-11/+11
| | | | | | This makes the formatting expectations more obvious (e.g. any zero padding specified is padding that's entirely dedicated to the value being printed, not any pretty-printing that also gets tacked on).
* general: Convert assertion macros over to be fmt-compatibleLioncash2018-04-271-10/+9
|
* Merge pull request #387 from Subv/maxwell_2dbunnei2018-04-261-0/+4
|\ | | | | GPU: Partially implemented the 2D surface copy engine
| * Memory: Added a missing shortcut for Memory::CopyBlock for the current process.Subv2018-04-251-0/+4
| |
* | core/memory: Amend address widths in assertsLioncash2018-04-251-2/+2
| | | | | | | | Addresses are 64-bit; these formatting specifiers are simply holdovers from Citra. Adjust them to be the correct width.
* | core/memory: Move logging macros over to new fmt-capable onesLioncash2018-04-251-22/+24
|/ | | | While we're at it, correct addresses to print all 64 bits where applicable, which were holdovers from citra.
* gl_rasterizer_cache: Update to be based on GPU addresses, not CPU addresses.bunnei2018-04-251-16/+48
|
* memory: Fix cast for ReadBlock/WriteBlock/ZeroBlock/CopyBlock.bunnei2018-03-271-4/+8
|
* memory: Add RasterizerMarkRegionCached code and cleanup.bunnei2018-03-271-200/+190
|
* Merge pull request #265 from bunnei/tegra-progress-2bunnei2018-03-241-0/+40
|\ | | | | Tegra progress 2
| * memory: Fix typo in RasterizerFlushVirtualRegion.bunnei2018-03-231-3/+3
| |
| * memory: RasterizerFlushVirtualRegion should also check process image region.bunnei2018-03-231-0/+1
| |
| * rasterizer: Flush and invalidate regions should be 64-bit.bunnei2018-03-231-2/+2
| |
| * memory: Port RasterizerFlushVirtualRegion from Citra.bunnei2018-03-231-0/+39
| |
* | Remove more N3DS ReferencesN00byKing2018-03-221-9/+0
|/
* core: Move process creation out of global state.bunnei2018-03-141-15/+15
|
* memory: LOG_ERROR when falling off end of page tableMerryMage2018-02-211-0/+11
|
* memory: Silence formatting specifier warningsLioncash2018-02-141-21/+30
|
* memory: Replace all memory hooking with Special regionsMerryMage2018-01-271-317/+163
|
* memory: Return false for large VAddr in IsValidVirtualAddressRozlette2018-01-201-0/+3
|
* Remove gpu debugger and get yuzu qt to compileJames Rowe2018-01-131-40/+1
|
* fix macos buildMerryMage2018-01-091-4/+4
|
* core/video_core: Fix a bunch of u64 -> u32 warnings.bunnei2018-01-011-8/+8
|
* memory: Print addresses as 64-bit.bunnei2017-10-191-2/+2
|
* Merge remote-tracking branch 'upstream/master' into nxbunnei2017-10-101-143/+211
|\  # Conflicts:
| | #   src/core/CMakeLists.txt
| | #   src/core/arm/dynarmic/arm_dynarmic.cpp
| | #   src/core/arm/dyncom/arm_dyncom.cpp
| | #   src/core/hle/kernel/process.cpp
| | #   src/core/hle/kernel/thread.cpp
| | #   src/core/hle/kernel/thread.h
| | #   src/core/hle/kernel/vm_manager.cpp
| | #   src/core/loader/3dsx.cpp
| | #   src/core/loader/elf.cpp
| | #   src/core/loader/ncch.cpp
| | #   src/core/memory.cpp
| | #   src/core/memory.h
| | #   src/core/memory_setup.h
| * Memory: Make WriteBlock take a Process parameter on which to operateSubv2017-10-011-10/+17
| |
| * Memory: Make ReadBlock take a Process parameter on which to operateSubv2017-10-011-12/+28
| |
| * Fixed type conversion ambiguityHuw Pascoe2017-09-301-14/+22
| |
| * Merge pull request #2961 from Subv/load_titlesbunnei2017-09-291-7/+18
| |\ | | | | | | Loaders: Don't automatically set the current process every time we load an application.
| | * Memory: Allow IsValidVirtualAddress to be called with a specific process parameter.Subv2017-09-271-7/+18
| | | | | | | | | | | | There is still an overload of IsValidVirtualAddress that only takes the VAddr and will default to the current process.
| * | Merge pull request #2954 from Subv/cache_unmapped_memJames Rowe2017-09-271-1/+16
| |\ \ | | |/ | |/| Memory/RasterizerCache: Ignore unmapped memory regions when caching physical regions
| | * Memory/RasterizerCache: Ignore unmapped memory regions when caching physical regions.Subv2017-09-261-1/+16
| | | | | | | Not all physical regions need to be mapped into the address space of every process; for example, system modules do not have a VRAM mapping. This fixes a crash when loading applets and system modules.
| * | ARM_Interface: Implement PageTableChangedMerryMage2017-09-251-0/+5
| | |
| * | memory: Remove GetCurrentPageTablePointersMerryMage2017-09-241-4/+0
| | |
| * | memory: Add GetCurrentPageTable/SetCurrentPageTableMerryMage2017-09-241-1/+9
| |/ | | | | | | Don't expose Memory::current_page_table as a global.
| * Merge pull request #2842 from Subv/switchable_page_tableB3n302017-09-151-79/+74
| |\ | | | | | | Kernel/Memory: Give each process its own page table and allow switching the current page table upon reschedule
| | * Kernel/Memory: Make IsValidPhysicalAddress not go through the current process' virtual memory mapping.Subv2017-09-151-2/+1
| | |
| | * Kernel/Memory: Changed GetPhysicalPointer so that it doesn't go through the current process' page table to obtain a pointer.Subv2017-09-151-3/+62
| | |
| | * Kernel/Memory: Give each Process its own page table.Subv2017-09-101-75/+12
| | | | | | | | | | | | The loader is in charge of setting the newly created process's page table as the main one during the loading process.
| * | Use recursive_mutex instead of mutex to fix #2902danzel2017-08-291-2/+2
| | |
| * | Merge pull request #2839 from Subv/global_kernel_lockJames Rowe2017-08-241-1/+8
| |\ \ | | |/ | |/| Kernel/HLE: Use a mutex to synchronize access to the HLE kernel state between the cpu thread and any other possible threads that might touch the kernel (network thread, etc).
| | * Kernel/Memory: Acquire the global HLE lock when a memory read/write operation falls outside of the fast path, for it might perform an MMIO operation.Subv2017-08-221-1/+8
| | |
* | | memory: Log with 64-bit values.bunnei2017-09-301-8/+8
| | |
* | | core: Various changes to support 64-bit addressing.bunnei2017-09-301-22/+22
|/ /
* | Merge pull request #2799 from yuriks/virtual-cached-range-flushWeiyi Wang2017-07-221-52/+76
|\ \ | |/ |/| Add address conversion functions returning optional, Add function to flush virtual region from rasterizer cache
| * Memory: Add function to flush a virtual range from the rasterizer cacheYuri Kunde Schlesner2017-06-221-39/+52
| | | | | | | | | | | | This is slightly more ergonomic to use, correctly handles virtual regions which are disjoint in physical addressing space, and checks only regions which can be cached by the rasterizer.
| * Memory: Add TryVirtualToPhysicalAddress, returning a boost::optionalYuri Kunde Schlesner2017-06-221-4/+12
| |
| * Memory: Make PhysicalToVirtualAddress return a boost::optionalYuri Kunde Schlesner2017-06-221-9/+12
| | | | | | | | And fix a few places in the code to take advantage of that.
* | Memory: Fix crash when unmapping a VMA covering cached surfacesYuri Kunde Schlesner2017-06-221-5/+20
|/ | | | | | | | | | Unmapping pages tries to flush any cached GPU surfaces touching that region. When a cached page is invalidated, GetPointerFromVMA() is used to restore the original pagetable pointer. However, since that VMA has already been deleted, this hits an UNREACHABLE case in that function. Now when this happens, just set the page type to Unmapped and continue, which arrives at the correct end result.
* Memory: Add constants for the n3DS additional RAMYuri Kunde Schlesner2017-05-101-2/+6
| | | | This is 4MB of extra, separate memory that was added on the New 3DS.
* Revert "Memory: Always flush whole pages from surface cache"bunnei2016-12-181-10/+0
|
* Memory: Always flush whole pages from surface cacheYuri Kunde Schlesner2016-12-151-0/+10
| | | | | This prevents individual writes that touch a cached page but don't overlap the surface from constantly hitting the surface cache lookup.
* Expose page table to dynarmic for optimized reads and writes to the JITJames Rowe2016-11-251-6/+8
|
* memory: fix IsValidVirtualAddress for RasterizerCachedMemorywwylele2016-09-291-0/+3
| | | | RasterizerCachedMemory doesn't have a pointer but should still be considered valid.
* Use negative priorities to avoid special-casing the self-includeYuri Kunde Schlesner2016-09-211-1/+1
|
* Remove empty newlines in #include blocks.Emmanuel Gil Peyrot2016-09-211-4/+1
| | | | | | | This makes clang-format useful on those. Also add a bunch of forgotten transitive includes, which otherwise prevented compilation.
* Sources: Run clang-format on everything.Emmanuel Gil Peyrot2016-09-181-35/+49
|
* Memory: add ReadCString functionwwylele2016-08-271-0/+14
|
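The likely shape of such a helper, sketched against an assumed single-byte guest read (Read8 is declared here purely for illustration and is not defined):

    #include <cstdint>
    #include <string>

    std::uint8_t Read8(std::uint64_t vaddr); // assumed single-byte guest-memory read

    // Copy bytes out of guest memory until a NUL terminator or max_length is hit.
    std::string ReadCString(std::uint64_t vaddr, std::size_t max_length) {
        std::string result;
        result.reserve(max_length);
        for (std::size_t i = 0; i < max_length; ++i) {
            const char c = static_cast<char>(Read8(vaddr + i));
            if (c == '\0') {
                break;
            }
            result += c;
        }
        return result;
    }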
* Memory: Handle RasterizerCachedMemory and RasterizerCachedSpecial page types in the memory block manipulation functions.Subv2016-05-281-1/+60
|
* Memory: Make ReadBlock and WriteBlock accept void pointers.Subv2016-05-281-4/+4
|
* Memory: CopyBlockMerryMage2016-05-281-0/+41
|
* Memory: ZeroBlockMerryMage2016-05-211-0/+38
|
* Memory: ReadBlock/WriteBlockMerryMage2016-05-211-3/+74
|
* Memory: IsValidVirtualAddress/IsValidPhysicalAddressMerryMage2016-05-211-0/+21
|
* HWRasterizer: Texture forwardingtfarley2016-04-211-0/+140
|
* Memory: Do correct Phys->Virt address translation for non-APP linheapYuri Kunde Schlesner2016-03-061-1/+1
|
* Memory: Implement MMIOMerryMage2016-01-301-6/+80
|
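A generic sketch of what implementing MMIO means for a software page table, with interface names invented for the example (not Citra's actual types): pages marked as MMIO dispatch reads and writes to a handler object instead of copying from host RAM.

    #include <cstdint>

    // A page backed by a device rather than host memory: accesses on such a
    // page are forwarded to a handler instead of going through memcpy.
    class MMIORegion {
    public:
        virtual ~MMIORegion() = default;
        virtual std::uint32_t Read32(std::uint64_t addr) = 0;
        virtual void Write32(std::uint64_t addr, std::uint32_t value) = 0;
    };

    // The memory system's fast path handles plain RAM pages; a page marked as
    // MMIO would fall back to something like this.
    std::uint32_t ReadMMIO32(MMIORegion& handler, std::uint64_t vaddr) {
        return handler.Read32(vaddr);
    }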
* Fixed spelling errorsGareth Poole2015-10-091-2/+2
|
* memory: Get rid of pointer castsLioncash2015-09-101-14/+7
|
* Kernel: Add more infrastructure to support different memory layoutsYuri Kunde Schlesner2015-08-161-1/+4
| | | | | | This adds some structures necessary to support multiple memory regions in the future. It also adds support for different system memory types and the new linear heap mapping at 0x30000000.
* Memory: Move address type conversion routines to memory.cpp/hYuri Kunde Schlesner2015-08-161-1/+36
| | | | | These helpers aren't really part of the kernel, and mem_map.cpp/h is going to be moved there next.
* Memory: Fix unmapping of pagesYuri Kunde Schlesner2015-07-121-4/+2
|
* Common: Cleanup memory and misc includes.Emmanuel Gil Peyrot2015-06-281-3/+0
|
* Kernel: Add VMManager to manage process address spacesYuri Kunde Schlesner2015-05-271-4/+8
| | | | | | | | This enables more dynamic management of the process address space, compared to just directly configuring the page table for major areas. This will serve as the foundation upon which the rest of the Kernel memory management functions will be built.
* Memory: Use a table based lookup scheme to read from memory regionsYuri Kunde Schlesner2015-05-151-120/+123
|
* Memory: Read SharedPage directly from Memory::ReadYuri Kunde Schlesner2015-05-151-1/+2
|
* Memory: Read ConfigMem directly from Memory::ReadYuri Kunde Schlesner2015-05-151-1/+2
|
* Memmap: Re-organize memory function in two filesYuri Kunde Schlesner2015-05-151-0/+197
memory.cpp/h contains definitions related to accessing memory and configuring the address space; mem_map.cpp/h contains higher-level definitions related to configuring the address space according to the kernel and allocating memory.