
TOPIC: Graphics Card Question

Re: Graphics Card Question 1 year, 1 month ago #198237

  • hugly
  • OFFLINE
  • Platinum Boarder
  • Posts: 24915
  • 1 year, 1 month ago
No, of course I can't get the full GPU performance in a VM, and I use the Linux versions only for test purposes, but it works, and with my 1920X it isn't that slow.
It's better to travel well than to arrive...

Re: Graphics Card Question 1 year, 1 month ago #198239

  • David Rasberry
  • Pro User
  • NOW ONLINE
  • Platinum Boarder
  • Posts: 2662
  • 1 year, 1 month ago
The biggest advantage of an Nvidia Quadro GPU is getting 10-bit color output to a monitor, if the monitor is 10-bit capable. The GTX series will output 10-bit color, but typically only in games, not in pro graphics applications. It's a difference in the drivers.
Performance-wise, the CPU is typically more of a bottleneck, because NLEs do all compressed video decoding on the CPU. The GPU is only used for rendering output to the screen.
On export, the GPU renders output to a temp file, and the CPU encodes it to the final format.
A CPU and motherboard with more cores, more PCIe lanes, and 4 GB of RAM per core will improve timeline performance more than just adding a high-end GPU to a quad-core system.
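(To make that division of labour concrete, here's a minimal sketch in Python; the function names are hypothetical placeholders, not Lightworks internals.)

```python
# Minimal sketch of the export pipeline described above. All names are
# hypothetical placeholders, not Lightworks internals.

def cpu_decode(packet):
    """CPU stage: all compressed video decoding happens here."""
    return {"raw": packet}

def gpu_render(raw_frame):
    """GPU stage: effects, scaling, and output rendering."""
    return {"rendered": raw_frame}

def cpu_encode(rendered_frame):
    """CPU stage: encode the rendered frame to the final format."""
    return b"encoded"

def export(packets):
    # GPU renders to an intermediate; the CPU encodes it to the delivery format.
    return [cpu_encode(gpu_render(cpu_decode(p))) for p in packets]

print(len(export(range(240))))  # 240 frames in -> 240 encoded frames out
```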

Digital Bolex 2k Cinema DNG raw camera
Canon GL2 DV camcorder
iPAD Mini 3 Iographer rig

Workstation: Intel i7-4770k, Asrock Z87 Thunderbolt 2 MB, 16GB 1866 DDR3 ram,
2TB Seagate Hybrid system drive, 2TB Seagate NAS media drive, E-sata III hot swap drive bay, Nvidia GTX760 2GB GPU
Lightworks kybrd. Shuttlepro v2
Win10 Pro 64bit, Lightworks 14.0 64bit

Mobile Workstation: MSI GTX72 Dominator
Intel i7-6700HQ 2.7GHz Win10 64bit
16GB DDR4 ram, 500GB M.2 SSD
Nvidia GTX970 3GB GPU
USB3, USB3.1-C, Thunderbolt 3 ports
Shuttlepro2 Win10 64bit LW 14.0 64 bit

Re: Graphics Card Question 1 year, 1 month ago #198271

  • FathomStory
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 243
  • 1 year, 1 month ago
@David Rasberry, thanks. It is frustrating to have to think about upgrading my mobo et al. when (for me) I already spent a lot on it.

Here is something interesting: I recently managed to get Windows 10 onto one of my desktop hard drives and boot off it. I then installed Lightworks (I am allowed to install it on two machines, I believe) and ran the GPU test. Here are the results:

On Ubuntu 18.04 with an RX 560, AMDGPU-Pro driver (version 19.10):

41.88 fps
Testing shader performance: 29541.16 fps
Testing playback performance: 320.3 fps
Testing render performance: 125.02 fps

On Windows 10:

614.72 fps
Testing shader performance: 7650.92 fps
Testing playback performance: 380.05 fps
Testing render performance: 200.02 fps

That said, I don't like Windows as an OS and trust it about as far as I can throw it. But the Linux drivers do seem to need some catching up.
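(A quick ratio check on those numbers, values copied from the post; the first unlabeled figure is left out since its test name isn't shown.)

```python
# Ratio of the Windows results to the Linux results posted above.
linux   = {"shader": 29541.16, "playback": 320.30, "render": 125.02}
windows = {"shader": 7650.92,  "playback": 380.05, "render": 200.02}

for test in linux:
    print(f"{test}: Windows is {windows[test] / linux[test]:.2f}x Linux")
# shader: Windows is 0.26x Linux   (Linux is ahead here)
# playback: Windows is 1.19x Linux
# render: Windows is 1.60x Linux
```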

Re: Graphics Card Question 1 year, 1 month ago #198272

  • schrauber
  • OFFLINE
  • Platinum Boarder
  • Posts: 4323
  • 1 year, 1 month ago
FathomStory wrote:
.. it is frustrating to have to think about upgrading my mobo et al. when (for me) I already spent a lot on it..
If the Lightworks-created proxies are an option for your workflow, just leave the hardware untouched.

FathomStory wrote:
... But the Linux drivers do seem to need some catching up.
Details can be found in this linked, very long thread.
Mainly automatically translated
--------------------------------------------
Software: Lightworks 2020.1; || Windows 10, 64 Bit
Hardware: Intel i5-4440 (3.1 GHz); || shared RAM: 8 GB; || Intel HD Graphics 4600 (can use max. 2 GB of shared RAM)
Last Edit: 1 year, 1 month ago by schrauber.

Re: Graphics Card Question 1 year, 1 month ago #198273

  • FathomStory
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 243
  • 1 year, 1 month ago
@schrauber Thanks! What a thread! Bwhahahahaha!

I am halfway through it.

I could have told them that the problem is probably the drivers. If you look at the graphics driver development teams for Windows and Linux, it's night and day. Since the market share of Windows is far larger, so are the dev teams: a handful of devs on Linux versus an army for Windows. Just look at the frequency of driver releases on Windows, almost weekly if not more often, versus once every couple of months on Linux. For the comparatively small resources allocated to Linux, I think it does pretty durn good. And Linux is a far more stable OS versus the casino experience (will it work today?) of Windows.

Re: Graphics Card Question 6 months, 2 weeks ago #210301

  • G0bble
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 292
  • 6 months, 2 weeks ago
FWIW, I ended up upgrading my system a few months sooner than I had planned.

On a sheer whim, I got a WX5100 Radeon Pro GPU to pair with a 3900X. I had originally planned on an RX 5600 + 3700X, but in my haste I threw away all my plans and spent $100 extra on a GPU that is a generation or two behind and scores about half on the benchmarks. And then I spent $150 extra on a CPU that LWKS doesn't seem to fully utilize in any case. See the attached pics for example; these are utilization figures during an export. I thought that with more cores the time to export/render would decrease linearly, but I was sorely mistaken. The CPU never maxes out to reduce export time: it runs around 50-60%, and the GPU rarely runs at its max clock speed, usually sitting at 430 MHz, although D3D utilization is at 40-90% depending on whether I am rendering or exporting without rendering first. The GPU encode engines, of which there are 4 in the WX5100, are not used at all.


[Attachments: two screenshots of CPU/GPU utilization during an export]



Moreover, my system is now unbalanced, with a GPU bottleneck: too weak a GPU paired with too powerful a CPU to use it effectively. The only upside: I can multitask effortlessly, without a hint of sluggishness, while exporting or rendering.

G
Last Edit: 6 months, 2 weeks ago by G0bble.

Re: Graphics Card Question 6 months, 2 weeks ago #210302

  • David Rasberry
  • Pro User
  • NOW ONLINE
  • Platinum Boarder
  • Posts: 2662
  • 6 months, 2 weeks ago
Dedicated GPU hardware encoders aren't used by NLEs.

Digital Bolex 2k Cinema DNG raw camera
Canon GL2 DV camcorder
iPAD Mini 3 Iographer rig

Workstation: Intel i7-4770k, Asrock Z87 Thunderbolt 2 MB, 16GB 1866 DDR3 ram,
2TB Seagate Hybrid system drive, 2TB Seagate NAS media drive, E-sata III hot swap drive bay, Nvidia GTX760 2GB GPU
Lightworks kybrd. Shuttlepro v2
Win10 Pro 64bit, Lightworks 14.0 64bit

Mobile Workstation: MSI GTX72 Dominator
Intel i7-6700HQ 2.7GHz Win10 64bit
16GB DDR4 ram, 500GB M.2 SSD
Nvidia GTX970 3GB GPU
USB3, USB3.1-C, Thunderbolt 3 ports
Shuttlepro2 Win10 64bit LW 14.0 64 bit

Re: Graphics Card Question 6 months, 2 weeks ago #210303

  • G0bble
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 292
  • 6 months, 2 weeks ago
David Rasberry wrote:
Dedicated GPU hardware encoders aren't used by NLEs.


Right. I was wondering more about why the CPU isn't maxed out to finish the export in double-quick time.
Here is how it looks when simply rendering on Linux (Fedora):

[Attachment: screenshot of utilization while rendering on Fedora]


It seems to utilize the GPU a bit more with OpenGL than with D3D, but just a little. I did observe the render process maxing out the GPU clock for short periods while the CPU was still at 40%.

Before I upgraded, I imagined that all the CPU cores would max out at 100% and the export would finish really quickly. At least both rendering and export happen smoothly, without slowing down the rest of my system, so that's a definite plus.

To all the readers of this post: I would love to figure out what holds the system at a steady sub-60% utilization, taking extra time to finish when spare capacity exists. Does it behave the same on your system(s)?

G
Last Edit: 6 months, 2 weeks ago by G0bble.

Re: Graphics Card Question 6 months, 2 weeks ago #210304

  • schrauber
  • OFFLINE
  • Platinum Boarder
  • Posts: 4323
  • 6 months, 2 weeks ago
The advantages of a powerful GPU show up in Lightworks mainly when using many and/or complex effects. Some user effects can significantly increase the "3D" GPU load. I don't know what load third-party applications like Boris create.

So if, with a less powerful GPU, you see about 100% GPU load after adding or activating effects and your CPU load drops from 50% to 25%, a faster GPU can probably at best halve the export time, provided it brings the GPU load below 100% and the CPU load climbs back to the original 50%.
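(A toy model of that reasoning, assuming export throughput is set by the slower of the two stages; this is my simplification, not measured Lightworks behavior.)

```python
# Toy pipeline model: per-frame time is dominated by the slower stage.
def export_time(frames, gpu_speed, cpu_speed):
    return frames * max(1 / gpu_speed, 1 / cpu_speed)

# GPU-bound: GPU pegged at 100%, CPU coasting at ~25% of capacity.
print(export_time(100, gpu_speed=1.0, cpu_speed=4.0))  # 100.0 time units
# Doubling GPU speed halves the export; CPU load climbs back toward 50%.
print(export_time(100, gpu_speed=2.0, cpu_speed=4.0))  # 50.0 time units
# Beyond that, the CPU becomes the limit and a faster GPU stops helping.
print(export_time(100, gpu_speed=8.0, cpu_speed=4.0))  # 25.0 time units
```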


G0bble wrote:
.. I would love to figure out what holds the system at a steady sub-60% utilization, taking extra time to finish when spare capacity exists. Does it behave the same on your system(s)?

It's hard to say, and I don't think we can generalize. With less than 100% GPU load, the CPU usage on my low-performance system depends on the media (source media and export format), the Lightworks version, and probably the timeline (how many clips have to be decoded at the same time).
Regarding hardware, I have only read assumptions, all related to data transfer and multi-core management:
- RAM speed
- CPU architecture
- Drive speed and transfer rate (for very-high-bitrate media)
- Data transfer between CPU and GPU (scaling and effects are done by the GPU)

Personally, however, I only have the integrated Intel GPU, which only allows slow effect processing but can probably exchange data more directly with the CPU, because both are on the same chip.


EDIT: On my system with only 4 cores, however, the cores usually reach a load of over 80% when exporting 720p MP4. I see much lower values of < 50% mainly during playback, although for smooth playback of 4K media I would need more CPU usage (depending on the media).
Mainly automatically translated
--------------------------------------------
Software: Lightworks 2020.1; || Windows 10, 64 Bit
Hardware: Intel i5-4440 (3.1 GHz); || shared RAM: 8 GB; || Intel HD Graphics 4600 (can use max. 2 GB of shared RAM)
Last Edit: 6 months, 2 weeks ago by schrauber.

Re: Graphics Card Question 6 months, 2 weeks ago #210305

  • hugly
  • OFFLINE
  • Platinum Boarder
  • Posts: 24915
  • 6 months, 2 weeks ago
G0bble wrote:
Before I upgraded, I imagined that all the CPU cores would max out at 100%

In fact, it's exactly the opposite. With the same load (read: when exporting the same sequence), enhanced CPU and GPU performance will reduce the average load on the CPU and GPU. Why is this? The entire pipeline, from reading the source file to writing the destination, uses more components than just the GPU and the CPU cores: mainly everything that involves moving data around in memory and over the PCIe bus. Up to the point where the PCIe bus, memory access, or I/O becomes the main bottleneck, the export will still get faster, even though the average load on the GPU and CPU has dropped.

That isn't only theory; it's what I see when comparing my old i7 2600, 16 GB, GTX 1050 4 GB with my current 1920X, 32 GB, GTX 1060 6 GB.
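(Here's a toy version of that argument; the split into "compute" and "transfer" time is my own simplification, not a measurement.)

```python
# Per-frame time = compute time + fixed transfer overhead (PCIe, RAM, I/O).
# Faster compute shrinks only its own share, so average core utilization
# drops even though the export finishes sooner.
def frame_stats(compute, transfer):
    total = compute + transfer
    return total, compute / total  # (time per frame, core utilization)

print(frame_stats(compute=8.0, transfer=2.0))  # (10.0, 0.8)  old system
print(frame_stats(compute=2.0, transfer=2.0))  # (4.0, 0.5)   faster CPU/GPU
```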
It's better to travel well than to arrive...
Last Edit: 6 months, 2 weeks ago by hugly.

Re: Graphics Card Question 6 months, 2 weeks ago #210306

  • hugly
  • OFFLINE
  • Platinum Boarder
  • Posts: 24915
  • 6 months, 2 weeks ago
Just a side note: with the current 2020 beta, the export time for the reference sequence we've used to compare performance in the discussion schrauber linked to has dropped by approx. 30 percent on my system ***, and that's remarkable.

Also, Microsoft has improved Ryzen support in Win10 1903, which also has a measurable impact on Lightworks export performance, as my tests show.

pureinfotech.com/windows-10-1903-changes-amd-ryzen-processors/

Edit: *** When comparing V14.5 vs 2020 on my current system.
Both improvements together (older Win10 with V14.5 vs 2020 with the most recent Win10) have reduced the export time of our reference sequence in 720p24 H.264, high profile, 5 Mbps from 21 seconds to 12 seconds; that's 75% faster.
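(Spelling out that arithmetic, since "percent faster" gets read two ways:)

```python
old, new = 21, 12                       # export times in seconds
print(f"{old / new:.2f}x throughput")   # 1.75x, i.e. 75% faster
print(f"{(old - new) / old:.0%} less wall-clock time")  # 43% less
```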
It's better to travel well than to arrive...
Last Edit: 6 months, 2 weeks ago by hugly.

Re: Graphics Card Question 6 months, 2 weeks ago #210307

  • FathomStory
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 243
  • 6 months, 2 weeks ago
It's up to the software developers to optimize the app for the GPU. Avid advertises this: www.amd.com/en/graphics/workstation-media-and-entertainment-solutions-avid-media-composer. But Avid is a special case; most NLEs do fine with gaming GPUs. CAD programs will do better with AMD Pro graphics than with gaming cards, because they are designed to leverage those GPUs. It helps when Microsoft and AMD driver devs pitch in, but that is only part of the recipe.

Re: Graphics Card Question 6 months, 2 weeks ago #210313

  • G0bble
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 292
  • 6 months, 2 weeks ago
schrauber wrote:
The advantages of a powerful GPU show up in Lightworks mainly when using many and/or complex effects. Some user effects can significantly increase the "3D" GPU load. I don't know what load third-party applications like Boris create.

So if, with a less powerful GPU, you see about 100% GPU load after adding or activating effects and your CPU load drops from 50% to 25%, a faster GPU can probably at best halve the export time, provided it brings the GPU load below 100% and the CPU load climbs back to the original 50%.


Hmm... but the perplexing thing is that even when D3D maxes out at 90%, the GPU clock only stays at 430 MHz. Only the Unigine Tropics GPU benchmark has spiked the GPU clock rate to its maximum (though in a spiky way) so far, apart from the Linux OpenGL export-after-render test, which seems to hold a steady 90% GPU clock with a steady 40% CPU load balanced across all cores.

I am tempted to buy the RX 5600 when it launches, just to test the above idea. I had my heart set on it anyway, like a child that saw a toy in a shop window and made it an instant favorite.

Why not a 5700? I started the upgrade plan intending to stay within the limits of my CX400W PSU, so a 65 W TDP CPU and a 120 W TDP GPU were the most I was going to risk. Actual power draw usually runs about 50% higher than the TDP rating, but that is still manageable with a well-built Corsair PSU that can pump out 400 W steadily, not just at peak. Then I ended up getting a CPU at 105 W TDP and paired it with a WX5100 at 75 W TDP to stay within the envelope.

It is a different story that after spending more than $1200 on the upgrade (including a new mobo), I experienced an abrupt power loss twice, while maxing the CPU in Cinebench on backup power. So I put a watt meter on my UPS, only to discover that the model is a dud: for a 1100 VA rating, it can only handle about 450 W of peak wall draw before beeping the overload signal and shutting down. At 80% I expected about 800 W of capacity, but alas... So I now need to spend another $200ish on a better UPS. I don't know whether to laugh or cry. I might as well have spent another $60-70 on a 650 W PSU and bought a Radeon 5700, at 180 W TDP and slightly higher power consumption, which is 4x as powerful as the Radeon Pro I got. Yes, the 5700 was $20-30 less than the workstation-class GPU and I skipped it!! The RX 5600 might still fit in my power envelope if I remain stubborn and refuse to upgrade the PSU...
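(A worked version of that power budget, using the figures from the post; the 1.5x headroom and the 0.8 VA-to-watt factor are rules of thumb, not measured values.)

```python
# Power budget from the post: component TDPs plus ~50% real-draw headroom.
cpu_tdp, gpu_tdp = 105, 75                   # W: 3900X and WX5100 ratings
print((cpu_tdp + gpu_tdp) * 1.5)             # 270.0 W, within a CX400's 400 W

# UPS expectation vs reality: 1100 VA at the usual ~0.8 power factor.
print(1100 * 0.8)                            # 880.0 W (the post rounds to ~800 W)
# The unit instead tripped its overload alarm at roughly 450 W of wall draw.
```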

schrauber wrote:

Personally, however, I only have the integrated Intel GPU, which only allows slow effect processing but can probably exchange data more directly with the CPU, because both are on the same chip.

EDIT: On my system with only 4 cores, however, the cores usually reach a load of over 80% when exporting 720p MP4. I see much lower values of < 50% mainly during playback, although for smooth playback of 4K media I would need more CPU usage (depending on the media).


Those are decent and expected results for an iGPU that has to fetch from RAM. On the Ryzen 2400G I upgraded, the RAM latency was pegged at 74 ns, while online benchmark scores all peg Intel RAM latency at 27-37 ns (something to do with Intel's memory controller). With the extra L2/L3 cache that Intel always provides, compared to AMD's measly 2+4 MB, it is still better. Besides, LWKS can use Intel's Quick Sync, while other AMD platforms revert to CPU-only decode/encode.

Here is how the WX5100 looks in the Windows GPU test.

[Attachment: WX5100 Windows GPU test results]



G
Last Edit: 6 months, 2 weeks ago by G0bble.

Re: Graphics Card Question 6 months, 2 weeks ago #210316

  • G0bble
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 292
  • 6 months, 2 weeks ago
FathomStory wrote:
It's up to the software developers to optimize the app for the GPU. Avid advertises this: www.amd.com/en/graphics/workstation-media-and-entertainment-solutions-avid-media-composer. But Avid is a special case; most NLEs do fine with gaming GPUs. CAD programs will do better with AMD Pro graphics than with gaming cards, because they are designed to leverage those GPUs. It helps when Microsoft and AMD driver devs pitch in, but that is only part of the recipe.


That link is a "page not found". I found a better article here, FYI: www.avidblogs.com/how-avid-media-composer-uses-a-computer/

A good read for users of NLEs in general.

G

Re: Graphics Card Question 6 months, 2 weeks ago #210317

  • G0bble
  • Pro User
  • OFFLINE
  • Gold Boarder
  • Posts: 292
  • 6 months, 2 weeks ago
hugly wrote:
Just a side note: with the current 2020 beta, the export time for the reference sequence we've used to compare performance in the discussion schrauber linked to has dropped by approx. 30 percent on my system ***, and that's remarkable.

Also, Microsoft has improved Ryzen support in Win10 1903, which also has a measurable impact on Lightworks export performance, as my tests show.

pureinfotech.com/windows-10-1903-changes-amd-ryzen-processors/

Edit: *** When comparing V14.5 vs 2020 on my current system.
Both improvements together (older Win10 with V14.5 vs 2020 with the most recent Win10) have reduced the export time of our reference sequence in 720p24 H.264, high profile, 5 Mbps from 21 seconds to 12 seconds; that's 75% faster.


That's an awesome improvement! I think the MainConcept media SDK version in this release does something different than previous versions did. I noticed that LWKS does not use the GPU for decode/encode (as documented), but with the exported media the GPU usage spikes while the CPU is close to 0% for my 2K MP4 files.

Does 2020 show improved GPU utilization when rendering/exporting?

I am on the latest Win10 with the Ryzen power plans, not the Windows default, so I'm good on that front.

G
Last Edit: 6 months, 2 weeks ago by G0bble.