The Raven Ridge APU is a very capable processor for handling OpenCL in applications such as Blender, Darktable and GIMP. Unfortunately, the current Mesa implementation, Clover, is stuck at an older version of OpenCL and is not supported by these applications. AMD released its 18.40 driver with OpenCL 2.0+ support, targeting only Red Hat Enterprise Linux/CentOS 6.10 and 7.5 in addition to Ubuntu LTS. The good news is that the RPM packages built for the former can be used on Fedora. The graphics part of Raven Ridge is the Vega 8, basically a cut-down Vega 56 or Vega 64, which means the RX Vega driver can be chosen.

AMD provides instructions for extracting the rpm files, but here are the packages required for OpenCL:

kernel-devel (provided by the Fedora repository)
amdgpu-dkms
dkms
libopencl-amdgpu-pro
opencl-amdgpu-pro-icd

Once these are installed, applications needing OpenCL will automatically detect the driver located in /opt/amdgpu/lib64. Blender will list the device as an unknown AMD GPU and Darktable will enable it.

OpenCL from the official AMD driver enabled in Darktable
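If you want to confirm that the ICD is being picked up without launching a full application, you can run clinfo, or compile a small check along the lines of the sketch below. This is only a minimal sketch: the file name, build command and fixed array sizes are illustrative, and it assumes the standard OpenCL headers and the libOpenCL loader are installed.

```c
/* verify_cl.c - list OpenCL platforms and devices to confirm the AMD ICD is visible.
 * Build (illustrative): gcc verify_cl.c -o verify_cl -lOpenCL
 */
#define CL_TARGET_OPENCL_VERSION 120  /* silence header warnings on newer OpenCL headers */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Ask the ICD loader for all registered platforms (AMD's should be one of them). */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found - the ICD may not be installed.\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char name[256] = {0}, version[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
        printf("Platform %u: %s (%s)\n", p, name, version);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices) != CL_SUCCESS)
            continue;

        /* Print each device name; the Vega 8 should show up here if the driver works. */
        for (cl_uint d = 0; d < num_devices; ++d) {
            char dev_name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dev_name), dev_name, NULL);
            printf("  Device %u: %s\n", d, dev_name);
        }
    }
    return 0;
}
```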
Comments
Better yet, see if you can get the full log of the screen session used to create the data for the benchmarks.
There could be a flaw in the methodology that biases the results towards apt. We can't know whether it was a fair test unless the specifics of how the test was run are made available for review.
-jef
"That’s true, but it’s rather hard to test, especially when connection speeds come into it and finding packages the same size. I think it’s safe to say that you’ll be downloading less by using the deltas than you would on Debian.
For these tests, all packages were pre-downloaded so bypass the network download issue."
I find that odd since, based on their functions, network downloads are what they do. Even with the localinstall flag, fetching from the internet is what they are designed for... so this review seems to be designed to do something that neither yum nor apt was designed for?
I don't know, that seems odd... Maybe I am wrong?
Duv
You are wrong, in a way. Network activity across the external internet can be subject to all sorts of different factors outside of your control. Such network activity would distort the timing benchmarks greatly.
It's actually far more scientific if you work from local caches of the repository metadata and the packages, to avoid any external network activity during the benchmark tests. These tools do all their dep resolution transactions from local data stores regardless. Excluding the variability of network access is actually the most appropriate thing to do if your goal is to test transaction speed.
That being said, pulling packages and/or repository-level metadata across the network has an objective cost in terms of perceived user interaction, which is worth talking about if you can adequately separate it out from other operations.
In fact my biggest concern is that the benchmark author did not adequately segregate network activity for repository metadata when doing the test.
To make sense of the benchmarks I need details of the methodology used to run the tests, so I can reproduce the tests for myself.
-jef
Also, the way yum and apt differ is due to the fact that yum is designed for smooth transactions; for example, yum update is equivalent to apt-get dist-upgrade && apt-get update (I can't remember exactly, as I haven't used it for a while).
duvword: The author and the writer are the same person. He used to build Kororaa, a Gentoo-based distribution, and mostly updated packages manually. I am updating the post to add a link to his blog so issues can be discussed.