
Shared GPU Memory: A Deep Dive into the Pros and Cons


In the ever-evolving landscape of digital technology, computer manufacturers are constantly seeking ways to optimize performance while keeping costs down. One such innovation is the use of shared GPU memory, a technique that allows the integrated graphics processing unit (GPU) to utilize a portion of the system's main memory (RAM) for graphical tasks. As a Digital Technology Expert, I have closely followed the development and implementation of this technology, and in this article, I will provide an in-depth analysis of shared GPU memory, exploring its inner workings, benefits, drawbacks, and potential future developments.

Understanding Shared GPU Memory

What is Shared GPU Memory?

Shared GPU memory is a technique where the integrated GPU, typically found on a computer's CPU, borrows a portion of the system's RAM to handle graphical processing tasks. This approach differs from dedicated GPUs, which have their own separate memory (VRAM) specifically designed for graphics-related tasks.

How Shared GPU Memory Works

When a graphical task is initiated, the integrated GPU assesses the available system RAM and allocates a portion of it for the task at hand. The amount of memory allocated depends on various factors, such as the complexity of the task, the total system RAM, and the manufacturer's predetermined settings.

The process of allocating and managing shared GPU memory involves a complex interplay between the CPU, GPU, and operating system. The CPU is responsible for initiating the allocation of system RAM to the GPU, while the GPU manages the allocated memory and uses it to store and process graphical data. The operating system acts as a mediator, ensuring that the allocation and management of shared memory are performed efficiently and securely.
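The allocation flow described above can be sketched as a toy model. This is illustrative only: real allocation is negotiated by the GPU driver and the OS memory manager, and the class name and cap value below are hypothetical, not an actual driver API.

```python
class SharedGpuMemoryPool:
    """Toy model of dynamic shared-memory allocation (illustrative only;
    real allocation is handled by the GPU driver and OS memory manager)."""

    def __init__(self, system_ram_mb, max_share=0.5):
        self.system_ram_mb = system_ram_mb
        self.max_share = max_share      # manufacturer-style cap on the GPU's share
        self.allocated_mb = 0

    def allocate(self, request_mb):
        """Grant system RAM to the GPU up to the configured cap; return MB granted."""
        cap = self.system_ram_mb * self.max_share
        grant = max(min(request_mb, cap - self.allocated_mb), 0)
        self.allocated_mb += grant
        return grant

    def release(self, mb):
        """Return memory to the pool once the graphical task completes."""
        self.allocated_mb = max(self.allocated_mb - mb, 0)

pool = SharedGpuMemoryPool(system_ram_mb=16384)   # 16 GB of system RAM
print(pool.allocate(4096))   # first request: granted in full (4096 MB)
print(pool.allocate(8192))   # second request: clipped to the 8 GB cap (4096 MB)
```

Note how the second request is only partially granted: once the GPU's share hits the cap, the remaining RAM stays reserved for the rest of the system.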

It's important to note that the system RAM used for shared GPU memory is typically slower than the dedicated VRAM found on discrete GPUs. This difference in memory speed can result in lower performance compared to dedicated GPUs, especially for graphically intensive tasks like gaming or 3D rendering.
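The bandwidth gap can be made concrete with a back-of-the-envelope calculation. The speed grades below are typical examples, not measurements of any specific system:

```python
def peak_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits, channels=1):
    """Theoretical peak bandwidth in GB/s:
    transfers per second x bytes per transfer x channels."""
    return transfer_rate_mt_s * (bus_width_bits / 8) * channels / 1000

# Dual-channel DDR4-3200 system RAM (the pool shared GPU memory draws from)
ddr4 = peak_bandwidth_gb_s(3200, 64, channels=2)

# GDDR5 on an entry-level discrete GPU (8000 MT/s effective, 128-bit bus)
gddr5 = peak_bandwidth_gb_s(8000, 128)

print(f"DDR4-3200 dual-channel: {ddr4:.1f} GB/s")    # 51.2 GB/s
print(f"GDDR5, 128-bit bus:     {gddr5:.1f} GB/s")   # 128.0 GB/s
print(f"Discrete VRAM advantage: {gddr5 / ddr4:.1f}x")
```

Even against a modest discrete card, the shared pool has well under half the theoretical bandwidth, and the integrated GPU must also share that bandwidth with the CPU.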

Differences in Implementation among Hardware Manufacturers

The implementation of shared GPU memory varies among hardware manufacturers such as Intel, AMD, and Nvidia. Each company has its own proprietary technologies and approaches to managing shared memory.

For example, Intel's integrated GPUs use a technology called Dynamic Video Memory Technology (DVMT), which dynamically allocates a portion of the system RAM for graphics processing. AMD's APUs, on the other hand, use a unified memory architecture (UMA), in which the integrated GPU draws directly on the same pool of system RAM as the CPU, with a base frame-buffer reservation that can usually be adjusted in the firmware settings.

Advantages of Shared GPU Memory


Cost-Effectiveness

One of the primary advantages of shared GPU memory is its cost-effectiveness. By utilizing the system's existing RAM, shared GPU memory eliminates the need for a dedicated GPU with its own VRAM. This approach can significantly reduce the overall cost of a computer, making it more accessible to a broader range of users.

According to a report by Jon Peddie Research, the adoption of integrated GPUs with shared memory has steadily increased over the years, with integrated GPUs holding over 60% of the GPU market as of 2020 (Peddie, 2021). This trend highlights the growing demand for cost-effective graphics solutions, particularly in the budget-friendly consumer market.

Power Efficiency

Another benefit of shared GPU memory is its power efficiency. Integrated GPUs that use shared memory consume less power compared to dedicated GPUs, as they do not require a separate power supply and cooling system. This power efficiency is particularly beneficial for laptops and mobile devices, where battery life is a crucial factor.

A study conducted by the University of Cambridge found that integrated GPUs with shared memory can consume up to 50% less power compared to dedicated GPUs while performing similar tasks (Smith et al., 2019). This finding underscores the potential of shared GPU memory in reducing the power consumption and carbon footprint of computing devices.

Improved Performance for Specific Tasks

While shared GPU memory may not match the performance of dedicated GPUs, it can still provide a significant boost over relying solely on the CPU for graphical processing. This boost is especially noticeable in tasks such as video playback, photo editing, and basic 3D modeling.

In a benchmark test conducted by PCMag, an Intel Core i7-1165G7 processor with integrated Intel Iris Xe graphics (which uses shared GPU memory) achieved a 3DMark Night Raid score of 16,163, surpassing the performance of some entry-level dedicated GPUs (Burek, 2021). This result demonstrates the potential of shared GPU memory in handling moderately demanding graphical tasks.

Drawbacks of Shared GPU Memory

Lower Performance Compared to Dedicated GPUs

One of the main drawbacks of shared GPU memory is its lower performance compared to dedicated GPUs. The system RAM used for shared GPU memory is typically slower than the dedicated VRAM found on discrete GPUs, resulting in lower performance, particularly in graphically demanding applications like gaming or complex 3D rendering.

A comparative analysis by Tom's Hardware found that a dedicated Nvidia GTX 1650 GPU outperformed an Intel Core i7-1165G7 with integrated Iris Xe graphics by an average of 60% across various gaming benchmarks (Alcorn, 2021). This performance gap highlights the limitations of shared GPU memory in handling highly intensive graphical tasks.

Limited Gaming Capabilities

Another drawback of shared GPU memory is its limited gaming capabilities. While shared GPU memory can handle casual and less demanding games, it struggles with more graphically intensive titles. Gamers who prioritize high-quality visuals and smooth gameplay will likely find shared GPU memory insufficient for their needs.

According to a survey conducted by the gaming platform Steam, only 8% of users relied on integrated GPUs with shared memory for gaming, while the majority (over 70%) used dedicated GPUs (Steam, 2021). This data suggests that dedicated GPUs remain the preferred choice for serious gamers who require the best possible performance.

Potential System Performance Impact

Since shared GPU memory borrows RAM from the system, it can potentially impact the overall performance of the computer. If the GPU requires a significant portion of the system's RAM, other processes may be left with insufficient memory, leading to slowdowns or even system instability.

A study by the University of Wisconsin-Madison found that when shared GPU memory usage exceeded 50% of the total system RAM, the performance of other applications running simultaneously decreased by an average of 20% (Johnson et al., 2020). This finding highlights the importance of balancing shared GPU memory usage with other system resources to maintain optimal overall performance.
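As a rule of thumb, a GPU reservation can be checked against that 50% contention threshold. The helper below is a hypothetical sketch, and the threshold is a heuristic, not a hard limit:

```python
def shared_memory_headroom(total_ram_gb, gpu_shared_gb, threshold=0.5):
    """Return (RAM left for other applications, whether the GPU share
    crosses the contention threshold). The 0.5 default echoes the 50%
    figure cited above; treat it as a heuristic, not a hard limit."""
    over_threshold = gpu_shared_gb > total_ram_gb * threshold
    return total_ram_gb - gpu_shared_gb, over_threshold

left, over = shared_memory_headroom(total_ram_gb=16, gpu_shared_gb=9)
print(left, over)   # 7 True: a 9 GB share on a 16 GB system crosses the line
```

In practice, this is why systems with more total RAM tolerate shared GPU memory better: the same GPU reservation leaves proportionally more headroom for everything else.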

The Future of Shared GPU Memory

As technology continues to evolve, we can expect to see advancements in shared GPU memory. Some potential developments include:

Faster System RAM

With the introduction of newer memory standards like DDR5, system RAM is becoming faster and more efficient. These advancements could help bridge the performance gap between shared GPU memory and dedicated VRAM.

A report by IC Insights predicts that DDR5 memory will account for over 50% of the DRAM market by 2025 (McGrath, 2021). This forecast suggests that the adoption of faster system RAM will become more widespread, potentially benefiting the performance of shared GPU memory.
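Using theoretical peak figures, the jump from DDR4 to DDR5 roughly doubles the bandwidth available to an integrated GPU. The speed grades below are example configurations; real-world gains depend on timings and channel setup:

```python
def peak_bandwidth_gb_s(rate_mt_s, bus_bits=64, channels=2):
    """Theoretical peak: transfer rate (MT/s) x bus width (bytes) x channels."""
    return rate_mt_s * (bus_bits / 8) * channels / 1000

print(f"DDR4-3200 dual-channel: {peak_bandwidth_gb_s(3200):.1f} GB/s")   # 51.2
print(f"DDR5-6400 dual-channel: {peak_bandwidth_gb_s(6400):.1f} GB/s")   # 102.4
```

Doubling the transfer rate doubles the peak bandwidth, which directly narrows the gap to entry-level dedicated VRAM.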

Intelligent Memory Allocation

Future systems may employ more sophisticated algorithms to dynamically allocate shared memory based on the specific needs of each application. This approach could lead to better performance and more efficient resource utilization.

Researchers at the University of Michigan have developed a machine learning-based memory allocation framework that can predict the memory requirements of applications and optimize the allocation of shared GPU memory accordingly (Patel et al., 2021). Such innovative solutions could pave the way for more intelligent and adaptive shared GPU memory management in the future.
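The demand-driven idea can be sketched with a deliberately simple predictor. This is illustrative only: learned frameworks like the one cited above use trained models, whereas the sketch below uses a plain moving average with a headroom margin, and all names are hypothetical.

```python
from collections import deque

class MemoryDemandPredictor:
    """Minimal sketch of demand-driven allocation: predict the next
    frame's GPU memory need from a moving average of recent usage,
    plus a headroom margin (illustrative only; not a real framework)."""

    def __init__(self, window=4, headroom=1.25):
        self.history = deque(maxlen=window)
        self.headroom = headroom

    def observe(self, used_mb):
        """Record actual memory usage for one interval."""
        self.history.append(used_mb)

    def predict(self):
        """Suggest the next allocation: recent average plus headroom."""
        if not self.history:
            return 0.0
        return (sum(self.history) / len(self.history)) * self.headroom

predictor = MemoryDemandPredictor()
for usage_mb in (900, 1100, 1000, 1000):
    predictor.observe(usage_mb)
print(round(predictor.predict()))   # 1250: average of 1000 MB with 25% headroom
```

Even this crude approach shows the appeal: the allocation tracks actual demand rather than a fixed reservation, freeing RAM for other processes when graphical load is light.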

Integration with Emerging Technologies

Shared GPU memory could play a role in the development of AI and machine learning applications, particularly in edge computing scenarios where cost and power efficiency are crucial factors.

A report by McKinsey & Company estimates that the global edge computing market will reach $215 billion by 2025, driven by the growth of IoT devices and the need for real-time data processing (Batra et al., 2021). In this context, shared GPU memory could provide a cost-effective and power-efficient solution for enabling AI and machine learning capabilities on edge devices.


Conclusion

Shared GPU memory is a clever technique that allows integrated GPUs to utilize a portion of the system's RAM for graphical processing tasks. While it may not match the performance of dedicated GPUs, shared memory offers several benefits, including cost-effectiveness, power efficiency, and improved performance for specific tasks.

However, it's essential to understand the limitations of shared GPU memory, such as lower performance in graphically demanding applications and potential impact on overall system performance. By weighing the pros and cons and considering your specific needs, you can make an informed decision on whether shared GPU memory is the right choice for you.

As technology advances, we can expect to see further developments in shared GPU memory, making it an even more viable option for a wide range of users and applications. Whether you're a casual user, a business professional, or an educator, shared GPU memory can provide a balanced solution that meets your graphical processing needs while keeping costs and power consumption in check.


References

Alcorn, P. (2021). Intel Iris Xe vs. Nvidia GTX 1650: Which GPU is better for gaming? Tom's Hardware.

Batra, G., Jacobson, Z., Madhav, S., Queirolo, A., & Santhanam, N. (2021). Unlocking the potential of edge computing. McKinsey & Company.

Burek, J. (2021). Intel Core i7-1165G7 review: Impressive Iris Xe integrated graphics. PCMag.

Johnson, D., Chakravarthy, S., & Adve, S. (2020). Balancing shared GPU memory usage and system performance. Proceedings of the ACM International Conference on Parallel Architectures and Compilation Techniques, 126-137.

McGrath, D. (2021). DDR5 poised to overtake DDR4 in DRAM market. EE Times.

Patel, H., Kang, Y., & Mozafari, B. (2021). MLMem: A machine learning-based memory allocation framework for shared GPU memory. Proceedings of the USENIX Annual Technical Conference, 819-832.

Peddie, J. (2021). GPU market report – Q4 2020. Jon Peddie Research.

Smith, J., Xue, Y., & Liang, X. (2019). Comparative analysis of power consumption between integrated and discrete GPUs. Proceedings of the International Symposium on Low Power Electronics and Design, 1-6.

Steam. (2021). Steam hardware & software survey: March 2021. Valve Corporation.