ParaView needs higher OpenGL in Mesa - Qt

I'm trying to use ParaView 5.3.0 on CentOS.
I compiled it with Qt5. When I start ParaView it tells me:
GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but
is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you
are using Mesa please make sure you have version 10.6.5 or later and make sure your driver
in Mesa supports OpenGL 3.2.
Here is the onboard graphics card:
lspci |grep VGA
03:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. Device 0536 (rev 04)
And the glxinfo:
glxinfo | grep OpenGL
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.9, 256 bits)
OpenGL version string: 2.1 Mesa 17.0.1
OpenGL shading language version string: 1.30
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 2.0 Mesa 17.0.1
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 1.0.16
OpenGL ES profile extensions:
I don't understand what "... make sure your driver in Mesa ..." means.
Kind regards

When using llvmpipe/Gallium with Mesa (a software renderer), the OpenGL capabilities can be detected incorrectly. The simplest way to fix that is to force the reported version:
MESA_GL_VERSION_OVERRIDE=3.3 ./bin/paraview

llvmpipe is Mesa's software rendering driver.
I don't know whether you are running CentOS in a VM or not (even if glxinfo says "OpenGL vendor string: VMware, Inc.", that doesn't necessarily imply you are running on VMware).
If you are not running CentOS as a VM guest, consider installing the appropriate Mesa drivers for your video card.
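Whether you are on the software path can be read straight off the renderer string. A minimal sketch (the helper name and marker list are my own), fed with the glxinfo lines quoted in the question:

```python
# Detect Mesa software rendering by inspecting glxinfo output.
# In practice you would feed in the output of `glxinfo | grep OpenGL`.

def is_software_renderer(glxinfo_text):
    """Return True if the renderer string points at a software rasterizer."""
    software_markers = ("llvmpipe", "softpipe", "swrast")
    for line in glxinfo_text.splitlines():
        if line.startswith("OpenGL renderer string:"):
            renderer = line.split(":", 1)[1].lower()
            return any(m in renderer for m in software_markers)
    return False

# Sample taken from the question above.
sample = """OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.9, 256 bits)
OpenGL version string: 2.1 Mesa 17.0.1"""

print(is_software_renderer(sample))  # → True
```

If this prints True, you are rendering on the CPU, and either the version override above or a proper hardware driver is the way forward.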

Related

Kivy OpenGL requirements feasible for deployment?

I'm currently in the process of finding a nice GUI framework for my new project, and Kivy looks quite good.
There are many questions here (like this one) about Kivy requiring OpenGL >2.0 (not accepting 1.4) and problems arising from that. As I understand it, it's the graphics driver's job to provide a decent OpenGL version.
I'm concerned about the problems I'll have deploying my app to users whose configuration will not, or cannot, provide OpenGL >2.0 on their desktop.
First off, deploying on Windows would not be a problem with regard to OpenGL; support there is good.
But I'm specifically concerned about people (like me) with an Ubuntu installation (14.04 LTS) and the latest Nvidia binary driver from Ubuntu. It's simply the best driver currently, with the best performance (still far superior to nouveau, IMHO).
And it seems (or am I wrong? that would be great) that this driver only provides OpenGL 1.4:
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions: [...]
So my question is two-fold:
Is it true that the nvidia binary driver only supports OpenGL 1.4, or am I wrong about that?
If it's true, doesn't that exclude many users with a quite common configuration (all Ubuntu users with Nvidia cards) from being able to use my Kivy application?
Any way to circumvent that?
I know OpenGL 1.4 is silly old stuff, but the driver is current and so is the hardware (GTX 770, quite a beast).
Installed driver:
root@host:/home/user# apt-cache policy nvidia-331-updates
nvidia-331-updates:
Installed: 331.38-0ubuntu7
Candidate: 331.38-0ubuntu7
Version table:
Nvidia information:
Version: 331.38
Release Date: 2014.1.13
I really hope I'm wrong..
EDIT: It has been said that 1.4 is the GLX version, not the OpenGL version. I've seen that now, but I thought it was 1.4 because when I try to execute an example from the dist, I get this error:
vagrant@ubuntu-14:/usr/local/share/kivy-examples/guide/firstwidget$ python 1_skeleton.py
[WARNING] [Config ] Older configuration version detected (0 instead of 10)
[WARNING] [Config ] Upgrading configuration in progress.
[INFO ] [Logger ] Record log in /home/vagrant/.kivy/logs/kivy_14-06-28_0.txt
[INFO ] Kivy v1.8.1-dev
[INFO ] [Python ] v2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2]
[INFO ] [Factory ] 169 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_pygame, img_gif (img_pil ignored)
[INFO ] [Window ] Provider: pygame(['window_egl_rpi'] ignored)
libGL error: failed to load driver: swrast
[INFO ] [GL ] OpenGL version <1.4 (2.1.2 NVIDIA 331.38)>
[INFO ] [GL ] OpenGL vendor <NVIDIA Corporation>
[INFO ] [GL ] OpenGL renderer <GeForce GTX 770/PCIe/SSE2>
[INFO ] [GL ] OpenGL parsed version: 1, 4
[CRITICAL] [GL ] Minimum required OpenGL version (2.0) NOT found!
OpenGL version detected: 1.4
Version: 1.4 (2.1.2 NVIDIA 331.38)
Vendor: NVIDIA Corporation
Renderer: GeForce GTX 770/PCIe/SSE2
Try upgrading your graphics drivers and/or your graphics hardware in case of problems.
So it actually parses my OpenGL version as 1.4..
EDIT 2: I'm running Kivy from GitHub (master branch) as of today (June 28th), so that should be fairly new ;-)
That's not the OpenGL version! It's the GLX version.
AFAIK GLX 1.4 is the latest release. You can use the glxinfo command to check all version numbers. On my machine I get:
$glxinfo | grep 'GLX version'
GLX version: 1.4
$glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile
OpenGL version string: 2.1 Mesa 10.1.3
OpenGL shading language version string: 1.20
OpenGL extensions:
See also How can I check my OpenGL version? for further details.
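The confusion above comes from two different version lines in the same glxinfo dump. A small sketch (the function name is my own) that pulls both out, using the sample lines from this answer, makes the distinction concrete:

```python
# Extract the GLX version and the OpenGL version from glxinfo-style output
# to show they are different things: GLX is the X11 binding protocol,
# OpenGL is the rendering API the driver actually implements.
import re

def extract_versions(glxinfo_text):
    versions = {}
    for line in glxinfo_text.splitlines():
        m = re.match(r"(GLX version|OpenGL version string):\s*([\d.]+)", line)
        if m:
            versions[m.group(1)] = m.group(2)
    return versions

# Sample taken from the answer above.
sample = """GLX version: 1.4
OpenGL vendor string: Intel Open Source Technology Center
OpenGL version string: 2.1 Mesa 10.1.3"""

print(extract_versions(sample))
# → {'GLX version': '1.4', 'OpenGL version string': '2.1'}
```

Here GLX 1.4 coexists happily with OpenGL 2.1; seeing "1.4" in a GLX line says nothing about the GL version.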
Summary: you don't have to worry about OpenGL 2.0 compatibility. Almost every device supports OpenGL 2.0 nowadays. For example, only 0.1% of Android devices support only OpenGL version 1.1.
I've answered a very similar question before, here. The answer is pasted below for reference:
Kivy, for which OpenGL 2.0 seems to be mandatory.
Strictly, Kivy targets OpenGL ES 2.0 as the minimum requirement. This
is not the same as OpenGL 2.0.
Well, the question is simple. At home I have three computers of which two are quite old with integrated graphics cards, which do not
support OpenGL 2.0.
This is fairly unusual nowadays. Even mobile devices have almost all
supported it for years (where, to be clear, 'it' is the OpenGL ES 2 features that Kivy relies on).
The only places you tend to see lack of support are older machines
with integrated graphics, like yours, though I have no statistics on
how common these are. Any machine with a 'proper' graphics card, or
integrated graphics from the last few years (e.g. intel's integrated
with sandy bridge etc.), will almost certainly work fine.
I've seen occasional problems in newer machines, e.g. some netbooks
with particularly poorly supported graphics chips, but these are very
much the exception rather than the norm.
Edit: For reference, Google seems to claim that 99.9% of devices support OpenGL ES 2 (at the time of writing).
Overall, it's extremely unlikely that you (or indeed, anyone who uses your application) would have any problems related to this.

How can I use OpenGL 3.3 Core Profile in Qt 5.4?

I have the following output from glxinfo | grep OpenGL:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.7.0-devel (git-e566e52 2015-06-29 vivid-oibaf-ppa)
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.7.0-devel (git-e566e52 2015-06-29 vivid-oibaf-ppa)
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 10.7.0-devel (git-e566e52 2015-06-29 vivid-oibaf-ppa)
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:
and the following from my Qt 5.4 application for my CG class:
OpenGL Version: 3.0 Mesa 10.7.0-devel (git-e566e52 2015-06-29 vivid-oibaf-ppa)
GLSL Version: 1.30
So how can one use the Core Profile of OpenGL? I really need GLSL 3.30.
I'm doing it inside a widget that's inherited from QOpenGLWidget like this:
QSurfaceFormat fmt;
fmt.setVersion( 3, 3 );
fmt.setProfile( QSurfaceFormat::CoreProfile );
setFormat( fmt );
QSurfaceFormat::setDefaultFormat( fmt );

AMD vs NVIDIA: how do they differ in terms of OpenCL support?

I have an EC2 instance. Its specs are:
g2.2xlarge Instance.
Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
NVIDIA GRID GPU (Kepler GK104)
Ubuntu 14.04 - 64 bit.
I have two questions:
1. After installing the CUDA toolkit on this system, I have the following output when using clinfo:
clinfo: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by clinfo)
Platform Version: OpenCL 1.2 CUDA 8.0.46
Platform Name: NVIDIA CUDA
Platform Vendor: NVIDIA Corporation
Number of devices: 1
Device Type: CL_DEVICE_TYPE_GPU
Name: GRID K520
Vendor: NVIDIA Corporation
Device OpenCL C version: OpenCL C 1.2
Driver version: 367.57
Profile: FULL_PROFILE
Version: OpenCL 1.2 CUDA
//with other info too which I can paste if required.
My question is: Is this a good practice? Will the GPU be used when coding with OpenCL?
2. Also, what if I install the AMD APP SDK? Would it be able to use the Nvidia graphics card via OpenCL, or would it use only the Intel CPU?
I installed the AMD APP SDK on another ec2 instance with the same specs and found out the following from clinfo:
Platform Version: OpenCL 1.2 AMD-APP (1214.3)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Number of devices: 1
Device Type: CL_DEVICE_TYPE_CPU
Name: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Vendor: GenuineIntel
Device OpenCL C version: OpenCL C 1.2
Driver version: 1214.3 (sse2,avx)
Profile: FULL_PROFILE
Version: OpenCL 1.2 AMD-APP (1214.3)
Does it mean that OpenCL will only be able to use the CPU, and the GPU (Nvidia) will never be used? If so, should I turn back to the Nvidia CUDA toolkit to make OpenCL use the GPU, or is there some other way/better practice?
Out of the two, which one should I use: the CUDA toolkit or the AMD APP SDK? My purpose is to run OpenCL cross-platform on all kinds of GPUs.
You can have multiple OpenCL platforms installed (NVIDIA CUDA, AMD APP, Intel). The actual OpenCL implementation for GPUs is part of the display driver. The AMD APP also comes with an OpenCL driver for the CPU (AMD or Intel).
Apple also has their own OpenCL platform, which should support the CPU and GPU installed in the machine.
So, if you want to use the NVIDIA GPU on your server, you probably should use the NVIDIA CUDA platform, although some have claimed that it should be possible to use AMD APP as well. In theory any platform should work as long as you have the NVIDIA OpenCL runtime driver installed.
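The distinction shows up directly in the two clinfo dumps above: the NVIDIA CUDA platform lists a GPU device, while AMD APP (with no AMD GPU driver present) lists only the CPU. A naive parsing sketch of that, keyed to the sample format quoted in the question (the function name is my own):

```python
# Summarize which device types each OpenCL platform exposes, given
# clinfo-style output: lines of "Platform Name:" followed by one or
# more "Device Type:" lines.

def summarize_platforms(clinfo_text):
    """Map each 'Platform Name' to the list of device types that follow it."""
    platforms = {}
    current = None
    for line in clinfo_text.splitlines():
        line = line.strip()
        if line.startswith("Platform Name:"):
            current = line.split(":", 1)[1].strip()
            platforms[current] = []
        elif line.startswith("Device Type:") and current is not None:
            platforms[current].append(line.split(":", 1)[1].strip())
    return platforms

# Condensed from the two clinfo outputs quoted in the question.
sample = """Platform Name: NVIDIA CUDA
Device Type: CL_DEVICE_TYPE_GPU
Platform Name: AMD Accelerated Parallel Processing
Device Type: CL_DEVICE_TYPE_CPU"""

print(summarize_platforms(sample))
# → {'NVIDIA CUDA': ['CL_DEVICE_TYPE_GPU'],
#    'AMD Accelerated Parallel Processing': ['CL_DEVICE_TYPE_CPU']}
```

In other words, each installed platform advertises only the devices its own driver can drive; picking the platform picks the devices.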

Trying to get JavaFX 2.0 3D working on an intel 945

JavaFX 2.0 doesn't support 3D with my driver.
JavaFX beta build 40
OS: Windows Vista
graphics: Mobile Intel 945 Express Chipset Family
driver version: 7.14.10.1504 (current)
latest driver according to Intel: 15.8.3.1504; on the download page, the file installs version 7.14.10.1504 (http://downloadcenter.intel.com/Detail_Desc.aspx?lang=eng&changeLang=true&DwnldId=16312)
I found this: "The minimum driver version for Intel HD was recently changed to 8.15.10.2302 to workaround bugs in older drivers." (https://forums.oracle.com/forums/thread.jspa?threadID=2255278)
Drivers with version >= 8.15.10.2302 are not available for the 945. Is there any workaround for this workaround, i.e. can I convince JavaFX to support 3D anyway? Any alternative drivers? Might using Mesa help?
It looks like Windows 7 has later updates for that graphics card. Have you tried installing a Windows 7 driver version to see if it works? There are applications that report a fake graphics card name so that games that won't run on unsupported hardware can run anyway. There are also probably graphics-card hacker groups that create drivers for Intel using drivers from later models (I know of ATI and Nvidia ones).
I bet there will also be a flag for JavaFX to force support for a graphics card. They have such a flag for Java2D. You might want to ask on the official JavaFX forum.
There is no way to get hardware acceleration for Intel 945 chipsets in JavaFX 2.0. The problem is that it needs Pixel Shader 3.0 to use hardware acceleration, but the chipset only supports PS 2.0.
You can find more details on why it does not use hardware acceleration with the following commands:
set NWT_TRACE_LEVEL=4
java -Dprism.verbose=true

How to upgrade to OpenGL 2.0 on Linux (nouveau)?

I ran the python-kivy hello world test program, but got a blank screen. An error message warned:
[INFO ] [GL ] OpenGL version <1.5 Mesa 9.2.2>
[INFO ] [GL ] OpenGL vendor <nouveau>
[INFO ] [GL ] OpenGL renderer <Gallium 0.4 on NV31>
[INFO ] [GL ] OpenGL parsed version: 1, 5
[CRITICAL] [GL ] Minimum required OpenGL version (2.0) NOT found!
It seems I need to upgrade OpenGL.
Mesa 9.2.2 supports OpenGL 3.1 (http://www.mesa3d.org/relnotes/9.2.2.html). All the other OpenGL-related packages are up to date, including freeglut3 (2.8.1-1). I think the issue might lie with my nouveau version. I have the 2.4.51-1 libdrm-nouveau2 and 1:1.0.10-1 xserver-xorg-video-nouveau packages installed, which are very recent versions. There doesn't look to be enough info on the nouveau homepage to work out which OpenGL version they support/implement.
I'm not really sure how to get OpenGL 2.0 running. Other answers on Google allude to graphics-driver implementations of OpenGL, but are quite vague and cryptic about the details.
Nouveau is still at a very experimental stage. So far people usually go with the NVIDIA proprietary binary drivers, which you can download from NVIDIA's website; there are also packages for most distributions.
In the case of an NV31, it's the only driver with OpenGL 2.0 support.
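For reference, the gate behind the "Minimum required OpenGL version (2.0) NOT found!" message above is essentially a version-tuple comparison. A rough sketch of that check (the function name is my own, not Kivy's actual code):

```python
# Compare the leading "major.minor" of a GL version string such as
# "1.5 Mesa 9.2.2" against a required minimum, as a tuple comparison.

def meets_minimum(version_string, minimum=(2, 0)):
    """Return True if the parsed version is at least `minimum`."""
    major, minor = version_string.split()[0].split(".")[:2]
    return (int(major), int(minor)) >= minimum

print(meets_minimum("1.5 Mesa 9.2.2"))   # → False, hence the CRITICAL error
print(meets_minimum("2.1 Mesa 17.0.1"))  # → True
```

So any driver that reports 2.0 or later, from whatever vendor, satisfies the check; with an NV31 that currently means the proprietary driver.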
