glDrawArrays Invalid Memory Access

Everything related to 3D programming
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

glDrawArrays Invalid Memory Access

Post by Samuel »

I'm in the process of converting some of my C++ code to PureBasic, but I hit a snag pretty early: glDrawArrays gives an invalid memory access error (caused by my incompetence).

I converted the freeglut window to PureBasic's OpenGLGadget() so you won't have to download any external files.

If you can get the example working, the window should have a red background with a single white pixel in the center.
Any help is appreciated.

Code: Select all

;-CONSTANTS
#GL_ARRAY_BUFFER = $8892
#GL_STATIC_DRAW = $88E4


;-STRUCTURES
Structure VertexData
  X.f
  Y.f
  Z.f
EndStructure
Dim Vertex.VertexData(10)
  Vertex(0)\X = 0.0
  Vertex(0)\Y = 0.0
  Vertex(0)\Z = 0.0 
  
  
;-DECLARES
Declare OGL_Render()


;-VARIABLES
Define.i argc = 0
  
Dim argv.s(0)
argv(0) = ""

Global.i VBO


;-MAIN WINDOW
OpenWindow(0, 0, 0, 800, 600, "Test", #PB_Window_SystemMenu | #PB_Window_ScreenCentered)
OpenGLGadget(0, 0, 0, 800, 600)

;IncludeFile "Functions.pbi"

CompilerIf (#PB_Compiler_Processor = #PB_Processor_x86)
  Import "Opengl32.lib"
    wglGetProcAddress_(s.p-ascii) As "_wglGetProcAddress@4"
  EndImport
CompilerElse   
  Import "Opengl32.lib"
    wglGetProcAddress_(s.p-ascii) As "wglGetProcAddress"
  EndImport
CompilerEndIf

Prototype PFNGLGENBUFFERSPROC(n.i, *buffers)
Global glGenBuffers.PFNGLGENBUFFERSPROC
glGenBuffers = wglGetProcAddress_("glGenBuffers")

Prototype PFNGLBINDBUFFERPROC(target.l, buffer.i)
Global glBindBuffer.PFNGLBINDBUFFERPROC
glBindBuffer = wglGetProcAddress_("glBindBuffer")

Prototype PFNGLBUFFERDATAPROC(target.l, size.i, *Data_, usage.l)
Global glBufferData.PFNGLBUFFERDATAPROC
glBufferData = wglGetProcAddress_("glBufferData")

Prototype PFNGLENABLEVERTEXATTRIBARRAYPROC(index.i)
Global glEnableVertexAttribArray.PFNGLENABLEVERTEXATTRIBARRAYPROC
glEnableVertexAttribArray = wglGetProcAddress_("glEnableVertexAttribArray")

Prototype PFNGLVERTEXATTRIBPOINTERPROC(index.i, size.i, type.l, normalized.b, stride.i, *pointer)
Global glVertexAttribPointer.PFNGLVERTEXATTRIBPOINTERPROC
glVertexAttribPointer = wglGetProcAddress_("glVertexAttribPointer")

Prototype PFNGLDISABLEVERTEXATTRIBARRAYPROC(index.i)
Global glDisableVertexAttribArray.PFNGLDISABLEVERTEXATTRIBARRAYPROC
glDisableVertexAttribArray = wglGetProcAddress_("glDisableVertexAttribArray")

Prototype PFNGLDRAWARRAYSPROC(mode.l, first.i, count.i)
Global glDrawArrays.PFNGLDRAWARRAYSPROC
glDrawArrays = wglGetProcAddress_("glDrawArrays")

;-Debug Return Values
Debug "glGenBuffers = " + Str(glGenBuffers)
Debug "glBindBuffer = " + Str(glBindBuffer)
Debug "glBufferData = " + Str(glBufferData)
Debug "glEnableVertexAttribArray = " + Str(glEnableVertexAttribArray)
Debug "glVertexAttribPointer = " + Str(glVertexAttribPointer)
Debug "glDisableVertexAttribArray = " + Str(glDisableVertexAttribArray)
Debug "glDrawArrays = " + Str(glDrawArrays)

glGenBuffers(1, @VBO)
glBindBuffer(#GL_ARRAY_BUFFER, VBO)
glBufferData(#GL_ARRAY_BUFFER, SizeOf(VertexData), @Vertex(0), #GL_STATIC_DRAW)
  
glClearColor_(1.0, 0.0, 0.0, 1.0)

;-MAIN REPEAT
Repeat
  
  Event = WindowEvent()
  
  OGL_Render()
  
Until Event = #PB_Event_CloseWindow

End

Procedure OGL_Render()
  
  SetGadgetAttribute(0, #PB_OpenGL_SetContext, #True)
  
  glClear_(#GL_COLOR_BUFFER_BIT)
  
  glEnableVertexAttribArray(0)
  glBindBuffer(#GL_ARRAY_BUFFER, VBO)
  glVertexAttribPointer(0, 3, #GL_FLOAT, #GL_FALSE, 0, 0)
    
  glDrawArrays(#GL_POINTS, 0, 1)

  glDisableVertexAttribArray(0)
  
  SetGadgetAttribute(0, #PB_OpenGL_FlipBuffers, #True)
  
EndProcedure
Last edited by Samuel on Fri Apr 24, 2015 5:39 pm, edited 1 time in total.
applePi
Addict
Posts: 1404
Joined: Sun Jun 25, 2006 7:28 pm

Re: glDrawArrays Invalid Memory Access

Post by applePi »

Hi Samuel, your code works okay on my system. I have tried it on Windows XP and Windows 7 on the same computer, no errors. And because of my vision I have added
glPointSize_(20)
glColor3d_(0,1,0)
before the glDrawArrays(#GL_POINTS, 0, 1) line to see a big squared point.
Just a shot in the dark: it may be the VBO, the cache memory in the GPU, not sure. My VGA card is a GeForce GT 520; try your same code on other hardware.
Try the following code, which uses glDrawArrays to plot random big points without any VBO usage.
And yes, I find OpenGLGadget the easiest way to test GL things.
PS: glDrawArrays is also available in PureBasic without defining it through wglGetProcAddress_() the way your code does, so try glDrawArrays_(...)

Code: Select all

OpenWindow(0, 10, 10, 640, 480, "OpenGL demo")
SetWindowColor(0, RGB(200,220,200))
OpenGLGadget(0, 20, 10, WindowWidth(0)-40 , WindowHeight(0)-20)

Procedure.f RandF(Min.f, Max.f, Resolution.i = 10000)
  ProcedureReturn (Min + (Max - Min) * Random(Resolution) / Resolution)
EndProcedure


Structure Pointq
  x.f
  y.f
  r.f
  g.f
  b.f
  a.f
EndStructure

Global Dim points.Pointq(500)


Declare display()


Repeat
  ;// populate points
  For i = 0 To 500
    points(i)\x = RandF(-50,50)
    points(i)\y = RandF(-50,50)
    points(i)\r = RandF(0,1)
    points(i)\g = RandF(0,1)
    points(i)\b = RandF(0,1)
    points(i)\a = RandF(0,1)
  Next

  display()

  SetGadgetAttribute(0, #PB_OpenGL_FlipBuffers, #True)
Until WindowEvent() = #PB_Event_CloseWindow


Procedure display()

  glClear_(#GL_COLOR_BUFFER_BIT | #GL_DEPTH_BUFFER_BIT)

  glMatrixMode_(#GL_PROJECTION)
  glLoadIdentity_()
  ;glOrtho_(-50, 50, -50, 50, -1, 1)
  glOrtho_(-100, 100, -100, 100, -1, 1)

  glMatrixMode_(#GL_MODELVIEW)
  glLoadIdentity_()

  ;// draw
  glEnableClientState_(#GL_VERTEX_ARRAY)
  glEnableClientState_(#GL_COLOR_ARRAY)
  glVertexPointer_(2, #GL_FLOAT, SizeOf(Pointq), @points(0)\x)
  glColorPointer_(4, #GL_FLOAT, SizeOf(Pointq), @points(0)\r) ; the structure stores colors as floats, so #GL_FLOAT, not #GL_UNSIGNED_BYTE
  glPointSize_(10.0)
  glDrawArrays_(#GL_POINTS, 0, 500)
  ;glDrawArrays_(#GL_TRIANGLES, 0, 20)
  glDisableClientState_(#GL_VERTEX_ARRAY)
  glDisableClientState_(#GL_COLOR_ARRAY)

EndProcedure
  
Edit: I have changed points(i)\r (and its sisters) from = Random(255) to = RandF(0,1), and the structure variables from r.c (and sisters) to r.f, which seems more appropriate to me.
Last edited by applePi on Fri Apr 24, 2015 4:55 pm, edited 2 times in total.
bmon
User
Posts: 54
Joined: Sat May 24, 2008 8:51 pm
Location: U.S.

Re: glDrawArrays Invalid Memory Access

Post by bmon »

Hello Samuel ... Just to add some more confusion to the mix, I also get an IMA on the line "glGenBuffers(1, @VBO)" in your code. I did find applePi's code to work properly though. I have a super old graphics card, a GeForce MX440, with OpenGL version 1.8.2. Hope this helps ... Bruce
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

Re: glDrawArrays Invalid Memory Access

Post by Samuel »

Thanks for testing, applePi. I tried changing the glDrawArrays() in my code to glDrawArrays_() and everything works fine.
The strange thing is you said my original code with glDrawArrays() runs fine on your computer.

Makes me wonder if I'm collecting the address of glDrawArrays() correctly.
It might also be hardware related like you said. I have an ATI FireGL V7600 card, and if I remember correctly you have an Nvidia card.
One guess is that I'm not correctly getting glDrawArrays(), and your card is fixing my mistake while my card just tries to run it as is, which causes the memory error.

I guess some testing is in order to try and figure this out. I'll give an update if I find anything.


@bmon
Sorry, I forgot to mention the OpenGL requirements. My C++ code was based on 3.3, but if I remember correctly my test above should be able to run on a card with OpenGL 2.0 or higher.
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

Re: glDrawArrays Invalid Memory Access

Post by Samuel »

After some testing I noticed glDrawArrays is returning 0 in my code.
I updated my original code so that it now debugs the values after collecting the addresses. Is anyone else getting a null value for glDrawArrays, or is it just me?

I'm wondering what I'm doing wrong here, as I never had any issues with glDrawArrays in C++. Any tips or ideas would be very much appreciated.

In the meantime I guess I'll go back to testing.
luis
Addict
Posts: 3876
Joined: Wed Aug 31, 2005 11:09 pm
Location: Italy

Re: glDrawArrays Invalid Memory Access

Post by luis »

bmon wrote: GeForce MX440 with OpenGL version 1.8.2
That version of OpenGL does not exist, that number must be something else. :)
http://en.wikipedia.org/wiki/OpenGL

Samuel wrote: Makes me wonder if I'm collecting the address of glDrawArrays() correctly.

Does it work this way ? (it should)

Code: Select all

CompilerIf (#PB_Compiler_Processor = #PB_Processor_x64)

Import "Opengl32.lib" ; x64
 glDrawArrays(a.i,b.i,c.i) As "glDrawArrays"
EndImport

CompilerElse    

Import "Opengl32.lib" ; x86
 glDrawArrays(a.i,b.i,c.i) As "_glDrawArrays@12"
EndImport

CompilerEndIf
Samuel wrote: I'm wondering what I'm doing wrong here as I never had any issues with glDrawArrays in C++.

How did you import it there ?


Some notes:

1) You should never use wglGetProcAddress() for OpenGL 1.1 functions; import them from the OpenGL library instead.

2) You should use wglGetProcAddress() for 1.3 functions and up, and for any extensions. 1.2 and 1.2.1 are a sort of gray area; you should be able to use wglGetProcAddress() for them too (1.2 just adds the imaging subset, which is not much used, so most people simply ignore it).

3) Some drivers MAY work using wglGetProcAddress() for OpenGL 1.1 functions, but there is no guarantee of that.

4) Using wglGetProcAddress() and testing the returned pointer works only under Windows. Under other OSes you can get a pointer even if the function is not actually available; the reason is too long to explain here, suffice to say the time of the binding is different due to architectural differences (X11 under Linux, for example) and the fact that you can request an address without having a RC yet. So testing the pointer is not a good idea for cross-platform programming.

5) Unless you are using a specific OpenGL version with a CORE profile, where you can rely on having all the core functions available, when you use any GL function above 1.2 in a compatibility profile (like PB does) or a plain old legacy one (<= 2.1) you should always test for the presence of the extension and only then use the commands it provides (see point 4, and the sketch after this list).
Testing for the OpenGL version may be ok as a broader test, but things are often available as extensions in drivers long before they become core in a higher version.
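
To make that concrete, here is a minimal sketch (mine, not from Samuel's post, though it reuses his import lines): the 1.1 function is imported from Opengl32.lib while the newer one is fetched at runtime, and the returned pointer is checked before use (Windows only, see point 4). A GL context must already be current, e.g. from OpenGLGadget().

Code: Select all

; Sketch only: 1.1 entry points are imported, anything newer is fetched at runtime.
CompilerIf (#PB_Compiler_Processor = #PB_Processor_x86)
  Import "Opengl32.lib"
    glDrawArrays(mode.l, first.i, count.i) As "_glDrawArrays@12" ; 1.1 -> import it
    wglGetProcAddress_(s.p-ascii) As "_wglGetProcAddress@4"
  EndImport
CompilerElse
  Import "Opengl32.lib"
    glDrawArrays(mode.l, first.i, count.i) As "glDrawArrays"
    wglGetProcAddress_(s.p-ascii) As "wglGetProcAddress"
  EndImport
CompilerEndIf

Prototype PFNGLGENBUFFERSPROC(n.i, *buffers)
Global glGenBuffers.PFNGLGENBUFFERSPROC

glGenBuffers = wglGetProcAddress_("glGenBuffers") ; 1.5 function -> runtime lookup
If glGenBuffers = 0 ; on Windows a null pointer means "not available"
  MessageRequester("Error", "glGenBuffers is not available on this driver")
  End
EndIf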

Yes, it's complicated and time consuming, but that's the right way to do it. The code you see here in the forum is ok for informal testing, but it would give endless problems in the real world (typically silently failing, or crashing with IMAs on some configurations).

Nothing wrong with that, as long as one keeps it in mind for a 'real' release.
Last edited by luis on Sat Apr 25, 2015 1:47 am, edited 3 times in total.
"Have you tried turning it off and on again ?"
A little PureBasic review
ts-soft
Always Here
Posts: 5756
Joined: Thu Jun 24, 2004 2:44 pm
Location: Berlin - Germany

Re: glDrawArrays Invalid Memory Access

Post by ts-soft »

The code from the first post runs without any error! Red window with a white pixel.

nVidia Geforce 750 ti
OpenGL Version: 4.5.0 NVIDIA 350.12
PureBasic 5.73 | SpiderBasic 2.30 | Windows 10 Pro (x64) | Linux Mint 20.1 (x64)
Old bugs good, new bugs bad! Updates are evil: might fix old bugs and introduce no new ones.
luis
Addict
Posts: 3876
Joined: Wed Aug 31, 2005 11:09 pm
Location: Italy

Re: glDrawArrays Invalid Memory Access

Post by luis »

ts-soft wrote: The code from the first post runs without any error!
Yep, here too (similar graphics card from nVidia), but it's somewhat wrong (I edited my previous post to add some explanations).
"Have you tried turning it off and on again ?"
A little PureBasic review
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

Re: glDrawArrays Invalid Memory Access

Post by Samuel »

Thank you, luis, for all the information.
Your notes on how to use wglGetProcAddress are very helpful. In C++ I used external libraries like GLEW, which made things a lot easier.

If you don't mind, I also have one quick question for you.
On the other platforms, what would be the best way of testing whether a function is available if I can't rely on the returned pointers?
I'm currently only working with Windows, but I plan on supporting other OSes in the near future.

Thanks again for that helpful information.
applePi
Addict
Posts: 1404
Joined: Sun Jun 25, 2006 7:28 pm

Re: glDrawArrays Invalid Memory Access

Post by applePi »

I hope I won't interrupt Samuel's thread. I read years ago that some cards bypass small errors (maybe they have some A.I.). I wish to see if this approach works on Samuel's computer, since it uses OpenGL the old way (i.e. without the OpenGL gadget, like the OpenGL cube.pb in the PB examples). It is from a Russian site http://purebasic.info/phpBB3ex/viewtopi ... 74&p=72161 . I changed it a little, just deleted the "_" from glDrawArrays_; it works for me with either glDrawArrays_ or glDrawArrays.
I have added #GL_TRIANGLE_STRIP = ... from the OpenGL 4 include files by luis: http://purebasic.fr/english/viewtopic.p ... 87#p348383
To use glDrawElements to draw what the indices refer to, just uncomment its line and comment out the glDrawArrays line.

Code: Select all

Structure TVertex
  x.f
  y.f
  z.f
EndStructure
Enumeration
  #Window_0
EndEnumeration
#GL_COLOR_BUFFER_BIT                = $00004000
#GL_DEPTH_BUFFER_BIT                = $00000100
#GL_ARRAY_BUFFER                    = $8892
#GL_ELEMENT_ARRAY_BUFFER            = $8893
#GL_MODELVIEW                       = $1700
#GL_PROJECTION                      = $1701
#GL_SMOOTH                          = $1D01
#GL_DEPTH_TEST                      = $0B71
#GL_CULL_FACE                       = $0B44
#GL_STATIC_DRAW                     = $88E4
#GL_VERTEX_ARRAY                    = $8074
#GL_FLOAT                           = $1406
#GL_TRIANGLES                       = $0004
#GL_TRIANGLE_STRIP                  = $0005
#GL_TRIANGLE_FAN                    = $0006
#GL_UNSIGNED_BYTE                   = $1401
#GL_UNSIGNED_SHORT                  = $1403
#GL_UNSIGNED_INT                    = $1405
 
Global pfd.PIXELFORMATDESCRIPTOR
 
Procedure HandleError (Result, Text$)
  If Result = 0
    MessageRequester("Error", Text$, 0)
    End
  EndIf
EndProcedure
 
Procedure OGL_PB(Width,Height)
Global hWnd = OpenWindow(#Window_0, 0, 0, Width, Height, "PB_OGL",  #PB_Window_SystemMenu|#PB_Window_ScreenCentered )
Global hdc = GetDC_(hWnd)       
pfd\nSize        = SizeOf(PIXELFORMATDESCRIPTOR)
pfd\nVersion     = 1
pfd\dwFlags      = #PFD_SUPPORT_OPENGL | #PFD_DOUBLEBUFFER | #PFD_DRAW_TO_WINDOW
pfd\dwLayerMask  = #PFD_MAIN_PLANE
pfd\iPixelType   = #PFD_TYPE_RGBA
pfd\cColorBits   = 24
pfd\cDepthBits   = 24 
pixformat = ChoosePixelFormat_(hdc, pfd)
HandleError( SetPixelFormat_(hdc, pixformat, pfd), "SetPixelFormat()")
hrc = wglCreateContext_(hdc)
HandleError( wglMakeCurrent_(hdc,hrc), "wglMakeCurrent()")
EndProcedure
 
OGL_PB(800,600)
Prototype PFNGLGENBUFFERSPROC ( n.i, *buffers)
Global glGenBuffers.PFNGLGENBUFFERSPROC
glGenBuffers = wglGetProcAddress_( "glGenBuffers" )
Prototype PFNGLBINDBUFFERPROC ( target.l, buffer.i)
Global glBindBuffer.PFNGLBINDBUFFERPROC
glBindBuffer = wglGetProcAddress_( "glBindBuffer" )
Prototype PFNGLBUFFERDATAPROC ( target.l, size.i, *Data_, usage.l)
Global glBufferData.PFNGLBUFFERDATAPROC
glBufferData = wglGetProcAddress_( "glBufferData" )

Prototype PFNGLDRAWARRAYSPROC(mode.l, first.i, count.i)
Global glDrawArrays.PFNGLDRAWARRAYSPROC
glDrawArrays = wglGetProcAddress_("glDrawArrays")
 
glMatrixMode_(#GL_PROJECTION)
glLoadIdentity_();
gluPerspective_(45.0, 800.0/600.0, 1.0, 60.0) ; float division, otherwise the aspect ratio truncates to 1
glMatrixMode_(#GL_MODELVIEW)
glTranslatef_(0, 0, -4)
glShadeModel_(#GL_SMOOTH) 
glEnable_(#GL_DEPTH_TEST)
glEnable_(#GL_CULL_FACE)     
glViewport_(0, 0, 800, 600)
 
Global BuffId.i,iiId.i
Dim Vertex.TVertex(5)
Vertex(0)\x = 1
Vertex(0)\y = -1
Vertex(0)\z = 0
 
Vertex(1)\x = -1
Vertex(1)\y = -1
Vertex(1)\z = 0
 
Vertex(2)\x = 1
Vertex(2)\y = 1
Vertex(2)\z = 0

Vertex(2)\x = 1
Vertex(2)\y = 1
Vertex(2)\z = 0

Vertex(1)\x = -1
Vertex(1)\y = -1
Vertex(1)\z = 0
 
Vertex(3)\x = -1
Vertex(3)\y = 1
Vertex(3)\z = 0
;=================================================================================
;== index data (two triangles forming a square) ==================================
Dim index.u(5)
index(0) = 1 
index(1) = 0
index(2) = 2
index(3) = 3
index(4) = 1
index(5) = 2
 
;indexsize = 6 ;#GL_UNSIGNED_BYTE
indexsize = 6*2 ;#GL_UNSIGNED_SHORT
;indexsize = 6*4 ;#GL_UNSIGNED_INT
 
glGenBuffers( 1, @BuffId )
glBindBuffer(#GL_ARRAY_BUFFER, BuffId )
glBufferData(#GL_ARRAY_BUFFER,SizeOf(TVertex)*4,@Vertex(0), #GL_STATIC_DRAW)
glBindBuffer(#GL_ARRAY_BUFFER,0);
glGenBuffers( 1, @iiId );
glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, iiId);
glBufferData(#GL_ELEMENT_ARRAY_BUFFER, indexsize,@index(0),#GL_STATIC_DRAW);
glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, 0);
 
Global rot.f = 1 
glDisable_(#GL_CULL_FACE) ; to see the front and back faces
glColor3f_(1.0, 0.5, 0.0)
Repeat
    Event = WindowEvent()
    Select Event
      Case #PB_Event_CloseWindow
        Quit = 1      
    EndSelect
    ;glClearColor_(0.2, 0.2, 0.2, 1)
    glClearColor_(0.2, 0.5, 0.2, 1)
  
    glClear_(#GL_COLOR_BUFFER_BIT | #GL_DEPTH_BUFFER_BIT)
    
    glEnableClientState_(#GL_VERTEX_ARRAY )
    glBindBuffer(#GL_ARRAY_BUFFER, BuffId)
    glVertexPointer_(3, #GL_FLOAT,0,0)
 
    
    glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, iiId)
    
    glRotatef_(rot.f, 0, 1, 0);
    ;glDrawElements_(#GL_TRIANGLES,indexsize,#GL_UNSIGNED_SHORT,0) ; draw a square using indices
    glDrawArrays_(#GL_TRIANGLE_STRIP, 0, 6) ; will draw a triangle
    glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, 0);
    glBindBuffer(#GL_ARRAY_BUFFER,0);
    glDisableClientState_(#GL_VERTEX_ARRAY);
 
    SwapBuffers_(hdc)
    Delay(16)
  Until Quit = 1
End
The following is the above example but with the OpenGL gadget, and it displays big points:

Code: Select all

Structure TVertex
  x.f
  y.f
  z.f
EndStructure

#GL_COLOR_BUFFER_BIT                = $00004000
#GL_DEPTH_BUFFER_BIT                = $00000100
#GL_ARRAY_BUFFER                    = $8892
#GL_ELEMENT_ARRAY_BUFFER            = $8893
#GL_MODELVIEW                       = $1700
#GL_PROJECTION                      = $1701
#GL_SMOOTH                          = $1D01
#GL_DEPTH_TEST                      = $0B71
#GL_CULL_FACE                       = $0B44
#GL_STATIC_DRAW                     = $88E4
#GL_VERTEX_ARRAY                    = $8074
#GL_FLOAT                           = $1406
#GL_TRIANGLES                       = $0004
#GL_UNSIGNED_BYTE                   = $1401
#GL_UNSIGNED_SHORT                  = $1403
#GL_UNSIGNED_INT                    = $1405


OpenWindow(0, 10, 10, 640, 480, "OpenGL demo")
SetWindowColor(0, RGB(200,220,200))
OpenGLGadget(0, 20, 10, WindowWidth(0)-40 , WindowHeight(0)-20)

CompilerIf (#PB_Compiler_Processor = #PB_Processor_x86)
  Import "Opengl32.lib"
    wglGetProcAddress_(s.p-ascii) As "_wglGetProcAddress@4"
  EndImport
CompilerElse   
  Import "Opengl32.lib"
    wglGetProcAddress_(s.p-ascii) As "wglGetProcAddress"
  EndImport
CompilerEndIf

Prototype PFNGLGENBUFFERSPROC ( n.i, *buffers)
Global glGenBuffers.PFNGLGENBUFFERSPROC
glGenBuffers = wglGetProcAddress_( "glGenBuffers" )
Prototype PFNGLBINDBUFFERPROC ( target.l, buffer.i)
Global glBindBuffer.PFNGLBINDBUFFERPROC
glBindBuffer = wglGetProcAddress_( "glBindBuffer" )
Prototype PFNGLBUFFERDATAPROC ( target.l, size.i, *Data_, usage.l)
Global glBufferData.PFNGLBUFFERDATAPROC
glBufferData = wglGetProcAddress_( "glBufferData" )

Prototype PFNGLDRAWARRAYSPROC(mode.l, first.i, count.i)
Global glDrawArrays.PFNGLDRAWARRAYSPROC
glDrawArrays = wglGetProcAddress_("glDrawArrays")

glMatrixMode_(#GL_PROJECTION)
glLoadIdentity_();
gluPerspective_(45.0, 800.0/600.0, 1.0, 60.0) ; float division, otherwise the aspect ratio truncates to 1
glMatrixMode_(#GL_MODELVIEW)
glTranslatef_(0, 0, -5)
glShadeModel_(#GL_SMOOTH) 
glEnable_(#GL_DEPTH_TEST)
glEnable_(#GL_CULL_FACE) 
glColor3f_(1.0, 0.7, 0.0)
glViewport_(0, 0, 800, 600)

Global BuffId.i,iiId.i
Dim Vertex.TVertex(3)
Vertex(0)\x = 1
Vertex(0)\y = -1
Vertex(0)\z = 0

Vertex(1)\x = -1
Vertex(1)\y = -1
Vertex(1)\z = 0

Vertex(2)\x = 1
Vertex(2)\y = 1
Vertex(2)\z = 0

Vertex(3)\x = -1
Vertex(3)\y = 1
Vertex(3)\z = 0
;=================================================================================
;=================================================================================
Dim index.l(5)
index(0) = 1 
index(1) = 0
index(2) = 2
index(3) = 3
index(4) = 1
index(5) = 2


;indexsize = 6*2 ;#GL_UNSIGNED_SHORT
indexsize = 6*4 ;#GL_UNSIGNED_INT (matches the index.l array)

glGenBuffers( 1, @BuffId )
glBindBuffer(#GL_ARRAY_BUFFER, BuffId )
glBufferData(#GL_ARRAY_BUFFER,SizeOf(TVertex)*4,@Vertex(0), #GL_STATIC_DRAW)
glBindBuffer(#GL_ARRAY_BUFFER,0);
glGenBuffers( 1, @iiId );
glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, iiId);
glBufferData(#GL_ELEMENT_ARRAY_BUFFER, indexsize,@index(0),#GL_STATIC_DRAW);
glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, 0);

glViewport_(0, 0, WindowWidth(0), WindowHeight(0))
Global rot.f = 1
glDisable_(#GL_CULL_FACE) ; to see the front and back faces
Repeat
  event = WindowEvent()
    If event = #PB_Event_CloseWindow
      quit = #True
    EndIf
    
  ;glViewport_(0, 0, WindowWidth(0), WindowHeight(0))
  glClearColor_(0.2, 0.9, 0.2, 1)
  glClear_(#GL_COLOR_BUFFER_BIT | #GL_DEPTH_BUFFER_BIT)
    
  glEnableClientState_(#GL_VERTEX_ARRAY )
  glBindBuffer(#GL_ARRAY_BUFFER, BuffId)
  glVertexPointer_(3, #GL_FLOAT,0,0)
    
  glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, iiId)
  glPointSize_(20)
  glColor3d_(1,0,0)
  glRotatef_(rot.f, 0, 1, 0);
  ;glDrawElements_(#GL_POINTS,indexsize,#GL_UNSIGNED_INT,0)
  glDrawArrays_(#GL_POINTS, 0, 4)
  glBindBuffer(#GL_ELEMENT_ARRAY_BUFFER, 0);
  glBindBuffer(#GL_ARRAY_BUFFER,0);
  glDisableClientState_(#GL_VERTEX_ARRAY);

  SetGadgetAttribute(0, #PB_OpenGL_FlipBuffers, #True)
Until event = #PB_Event_CloseWindow
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

Re: glDrawArrays Invalid Memory Access

Post by Samuel »

applePi, your examples cause the same memory error when I switch glDrawArrays_() to glDrawArrays().
Luis said that some drivers may work using wglGetProcAddress() for OpenGL 1.1 functions, but there is no guarantee of that.
He said it is better to import older functions like glDrawArrays from Opengl32.lib.

I forgot to mention in my last post that importing glDrawArrays from Opengl32.lib solves my problem.

Luis's code:

Code: Select all

CompilerIf (#PB_Compiler_Processor = #PB_Processor_x64)

  Import "Opengl32.lib" ; x64
    glDrawArrays(a.i,b.i,c.i) As "glDrawArrays"
  EndImport

CompilerElse   

  Import "Opengl32.lib" ; x86
    glDrawArrays(a.i,b.i,c.i) As "_glDrawArrays@12"
  EndImport

CompilerEndIf
luis
Addict
Posts: 3876
Joined: Wed Aug 31, 2005 11:09 pm
Location: Italy

Re: glDrawArrays Invalid Memory Access

Post by luis »

Samuel wrote: Thank you, luis, for all the information.
You are wellllllcome.
Samuel wrote: I forgot to mention in my last post that importing glDrawArrays from Opengl32.lib solves my problem.
Good. I noticed something was missing :)
Samuel wrote: On the other platforms, what would be the best way of testing whether a function is available if I can't rely on the returned pointers?
Actually I've already answered that, the right thing to do on any platform is:
5) Unless you are using a specific OpenGL version with a CORE profile, where you can rely on having all the core functions available, when you use any GL function above 1.2 in a compatibility profile (like PB does) or a plain old legacy one (<= 2.1) you should always test for the presence of the extension and only then use the commands it provides (see point 4).
Testing for the OpenGL version may be ok as a broader test, but things are often available as extensions in drivers long before they become core in a higher version.
I'll try to explain it better.
I'll also tell you something extra, not strictly pertinent to your question, so maybe this post can be useful to other people; with all the types of rendering contexts available in OpenGL nowadays, this is often a cause of confusion.

I'm talking mainly about Windows here; on other OSes the principle is more or less the same, just with some things going by other names. In Windows you use wgl_ functions, under Linux you'll have similar glx_ functions; in Windows you look for a certain "pixel format", under Linux the equivalent is a "visual", etc.

In OpenGL one of the first things you have to decide is the type of RC you want to work with.

There are three types, two "official" and one I call "legacy". Let's start with the last one, the oldest type available.

The legacy RC.
This is the one you get when you ask for a RC from a GL driver supporting only 2.1 or lower.
A driver <= 2.1 does not know of any other type of RC and can give you only this type.
This is the oldest type, with which you can use all legacy GL commands from 1.1 up to <= 2.1.
Shaders are supported only in versions 2.0 and 2.1.

If I were to target an absolute minimum for new software, I would ask the user to have a driver supporting GL 2.1.
That way I could have a reasonably modern GL implementation, I could use shaders, I would have a lot of powerful commands (previously supported only as extensions, promoted to core functions in 2.1), and I would need only a very limited number of extensions (probably 2 or 3).

How does it work:
In Windows you select a pixel format supporting the features you need (double buffer, depth buffer size, etc.),
then you ask for the RC with wglCreateContext_(hDC),
and then you make the context current with wglMakeCurrent_(hDC, hRC).
When you ask for the RC, you get a RC associated with the highest version of OpenGL supported by that driver.
So if the driver supports GL 1.5, you get that. If the driver supports GL 2.1, you get that.
Simple.
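
In PB terms this is exactly what applePi's OGL_PB() procedure above already does; condensed to a sketch (assuming pfd is a filled-in PIXELFORMATDESCRIPTOR and hdc comes from GetDC_()):

Code: Select all

pixformat = ChoosePixelFormat_(hdc, pfd) ; pfd describes double buffer, depth bits, etc.
SetPixelFormat_(hdc, pixformat, pfd)
hrc = wglCreateContext_(hdc)             ; a RC for the highest GL version the driver offers
wglMakeCurrent_(hdc, hrc)                ; make it current before any GL call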

The RC for the CORE PROFILE.
This is the one you get when the GL driver is recent enough and you explicitly ask for a precise version and a CORE PROFILE.
Modern GL is considered anything above 3.0, but for all practical purposes you can consider only 3.2 or higher.
If you want to do CORE GL programming, ask for 3.2 or higher when requesting a RC.
3.0 - 3.1 were a transitional phase no one wants to talk about, where things behaved differently, and those versions had a very short life.
You'll be hard pressed to find a GL driver in real life supporting only 3.0 or 3.1. Don't care about them is my suggestion.
Just go for 3.2 or higher.
In the CORE PROFILE there is a list of deprecated GL commands from the previous versions (almost all come from the 1.1 version really) which are not usable anymore.
You can only use modern GL commands; you can't mix legacy commands with the modern ones.
Also, practically all work is done through shaders, and the more traditional GL commands are used mostly to prepare the stage and supply the data.

How does it work:
First you have to ask for a legacy RC (see above), then you use that to import the extension WGL_ARB_create_context_profile, and with that at hand you ask for another RC, requesting a CORE PROFILE and a specific GL version (>= 3.2).
If WGL_ARB_create_context_profile is not available, the driver does not support this mode and you can fall back to the legacy GL context you already have, which will support at most GL 2.1.
If the extension is present, you will get a RC supporting AT LEAST (usually EXACTLY) the version you asked for, or NULL if the version is not supported.
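
A minimal sketch of that two-step dance, assuming the wglGetProcAddress_ import shown earlier in the thread (the #WGL_ constants come from the WGL_ARB_create_context / _profile specs, they are not PB built-ins; error handling omitted):

Code: Select all

#WGL_CONTEXT_MAJOR_VERSION_ARB    = $2091
#WGL_CONTEXT_MINOR_VERSION_ARB    = $2092
#WGL_CONTEXT_PROFILE_MASK_ARB     = $9126
#WGL_CONTEXT_CORE_PROFILE_BIT_ARB = $0001

Prototype PFNWGLCREATECONTEXTATTRIBSARBPROC(hdc.i, hShareContext.i, *attribList)

Procedure.i CreateCoreRC(hdc.i, Major.l, Minor.l)
  Protected hLegacyRC.i, hCoreRC.i
  Protected wglCreateContextAttribsARB.PFNWGLCREATECONTEXTATTRIBSARBPROC
  Protected Dim attribs.l(6)

  hLegacyRC = wglCreateContext_(hdc) ; step 1: a legacy RC, just to bootstrap
  wglMakeCurrent_(hdc, hLegacyRC)

  wglCreateContextAttribsARB = wglGetProcAddress_("wglCreateContextAttribsARB")
  If wglCreateContextAttribsARB = 0
    ProcedureReturn hLegacyRC ; no extension: fall back to the legacy RC (<= 2.1)
  EndIf

  attribs(0) = #WGL_CONTEXT_MAJOR_VERSION_ARB : attribs(1) = Major
  attribs(2) = #WGL_CONTEXT_MINOR_VERSION_ARB : attribs(3) = Minor
  attribs(4) = #WGL_CONTEXT_PROFILE_MASK_ARB  : attribs(5) = #WGL_CONTEXT_CORE_PROFILE_BIT_ARB
  attribs(6) = 0 ; the attribute list is zero-terminated

  hCoreRC = wglCreateContextAttribsARB(hdc, 0, @attribs(0)) ; NULL if unsupported
  If hCoreRC
    wglMakeCurrent_(hdc, hCoreRC)
    wglDeleteContext_(hLegacyRC) ; the bootstrap RC is no longer needed
  EndIf
  ProcedureReturn hCoreRC
EndProcedure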

BTW: to know which version of GL your RC supports, you use

Code: Select all

PeekS(glGetString_(#GL_VERSION),-1,#PB_Ascii)) 
and parse the string to extract the version (legacy context only), or use

Code: Select all

glGetIntegerv_(#GL_MAJOR_VERSION, @Major)
glGetIntegerv_(#GL_MINOR_VERSION, @Minor)
for a modern CORE PROFILE context.
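
For the legacy path, a throwaway parsing sketch of mine (assuming the version string starts with the usual "major.minor" digits):

Code: Select all

Procedure GetLegacyGLVersion(*Major.Integer, *Minor.Integer)
  Protected v$ = PeekS(glGetString_(#GL_VERSION), -1, #PB_Ascii) ; e.g. "4.5.0 NVIDIA 350.12"
  *Major\i = Val(StringField(v$, 1, "."))
  *Minor\i = Val(StringField(v$, 2, ".")) ; Val() stops at the first non-digit
EndProcedure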

The RC for the COMPATIBILITY PROFILE
This is the one you get when the GL driver is recent enough (>= 3.2) and:
- you ask explicitly for it: you specify you want a COMPATIBILITY PROFILE instead of a CORE PROFILE by using again WGL_ARB_create_context_profile, asking for an OpenGL version equal to 1.0.
- you ask for a RC like you did for GL <= 2.1 without specifying anything more: you just use wglCreateContext_(hDC) and stop there (this is equivalent to the above steps).

The COMPATIBILITY PROFILE is *OPTIONAL*.
Not all vendors support it.
When it is available, you get the highest GL version supporting it.
You may get a higher GL version by asking for a CORE PROFILE.
Sometimes you get the same highest possible version both ways (nVidia for example does that and has expressed the will to continue to do so).

Real life: afaik INTEL, ATI and NVIDIA all support it, at least on Windows and Linux.

Example: you have a nVidia GL 4.3 driver.
You ask for a legacy RC, you get a COMPATIBILITY PROFILE (see above) supporting GL 4.3.
You ask for a COMPATIBILITY PROFILE, you get a COMPATIBILITY PROFILE supporting GL 4.3.
You ask for a CORE PROFILE for the 4.1 version, you get a CORE PROFILE supporting GL 4.1.
You ask for a CORE PROFILE for the 4.5 version, you get NULL.

In OSX, because Apple is what it is, supposing the driver is supporting GL 4.3 again:
You ask for a legacy RC, you get a legacy RC supporting GL 2.1.
You ask for a COMPATIBILITY PROFILE, you get a legacy RC supporting GL 2.1.
You ask for a CORE PROFILE for the 4.1 version, you get a CORE PROFILE supporting GL 4.1.
You ask for a CORE PROFILE for the 4.5 version, you get NULL.

In short, in OSX you get only GL 2.1 when you are not using a CORE PROFILE.

What do you do with a COMPATIBILITY PROFILE? You can use and mix all the GL commands, CORE and legacy, deprecated or not, up to the returned GL version.
The result of all this mixing and matching is up to you.

Back to your question:
Samuel wrote: On the other platforms, what would be the best way of testing whether a function is available if I can't rely on the returned pointers?
On all platforms:

If you are using a CORE PROFILE, and so a GL version >= 3.2, you can joyfully use all the functions declared as CORE.
Let's suppose you asked for a 4.0 CORE.
You can use a lot of functions: all the GL commands up to 2.1 (minus the deprecated ones) and all the new commands introduced in 3.0, 3.1, 3.2, 3.3 and 4.0.
You will very rarely need an extension with all that stuff available.
If you do, you'll have to check whether the extension is available by collecting and then querying the list of available extensions using

Code: Select all

glGetIntegerv_(#GL_NUM_EXTENSIONS, @Count) 
and then a loop from 0 to Count - 1

Code: Select all

PeekS(glGetStringi_(#GL_EXTENSIONS, i), -1, #PB_Ascii)
If the extension is available, you get the address, use a prototype, set the global var to the address, and use it.
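
Put together, the check might look like this sketch (glGetStringi is a 3.0 function, so it is not in Opengl32.lib and must be fetched at runtime; #GL_NUM_EXTENSIONS = $821D comes from the GL3 headers, it is not a PB built-in):

Code: Select all

#GL_NUM_EXTENSIONS = $821D ; from the GL3 headers

Prototype PFNGLGETSTRINGIPROC(name.l, index.i)
Global glGetStringi.PFNGLGETSTRINGIPROC
glGetStringi = wglGetProcAddress_("glGetStringi") ; 3.0 function -> runtime lookup

Procedure.i HasExtension_Modern(Name$)
  Protected Count.l, i.i
  glGetIntegerv_(#GL_NUM_EXTENSIONS, @Count)
  For i = 0 To Count - 1
    If PeekS(glGetStringi(#GL_EXTENSIONS, i), -1, #PB_Ascii) = Name$
      ProcedureReturn #True
    EndIf
  Next
  ProcedureReturn #False
EndProcedure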

If you are using a COMPATIBILITY PROFILE, and so a GL version from 1.1 up to the highest the driver supports in compatibility mode (potentially all the way up), you can use everything:
legacy, core-only, deprecated, and all the supported extensions.
Generally when you ask for a COMPATIBILITY PROFILE you ask for the highest possible (by conventionally specifying 1.0 as the version), because you want access to all the modern stuff while still writing legacy-style code, occasionally spiced up.
So you just use wglCreateContext_(hDC), and USUALLY (99% of the time) if the driver supports a COMPATIBILITY PROFILE it will return just that, with the highest possible GL version.
If you need extensions, again you just test for them, and if available you can use them as seen in the previous example.
The way to retrieve the list of supported extensions differs between legacy and modern mode.
What we saw above was the modern one; the other is discussed below in the legacy section.
In COMPATIBILITY mode you can actually use both, but since your code is probably written legacy style it's straightforward to use the legacy mode; both will work.

If you are using a legacy RC, and so a GL version between 1.1 and 2.1, you can use all the functions available up to that version, plus all the extensions supported by your driver.
BTW: try to use the ARB extensions when more than one type (EXT, SGI, etc.) is available; they are the "standard" way that set of commands will be implemented as core functionality in a higher version.
Again, to use those "recent" or "modern" commands, available only as extensions, you have to check whether the extension is present. If it is, same story: you get the address, use a prototype, set the global var to the address, and use it.
To check for an extension in legacy mode you have to collect and then query the list of available extensions using

Code: Select all

*p = glGetString_(#GL_EXTENSIONS)
Buffer$ = PeekS(*p,-1,#PB_Ascii)
and then split the string into the individual extension names; the call returns them all concatenated in one space-separated string.
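
For example, a small sketch of such a check (padding with spaces so a name cannot accidentally match a longer extension name):

Code: Select all

Procedure.i HasExtension_Legacy(Name$)
  Protected Ext$ = " " + PeekS(glGetString_(#GL_EXTENSIONS), -1, #PB_Ascii) + " "
  ProcedureReturn Bool(FindString(Ext$, " " + Name$ + " ") > 0)
EndProcedure

; usage: If HasExtension_Legacy("GL_ARB_framebuffer_object") ...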

NOTE: I suppose PB simply asks for a legacy RC, so if your driver is 4.10 and supports a COMPATIBILITY PROFILE all the way up to 4.10, that's why you get that, and it's thanks to that (the driver implementation) that you can use all OpenGL commands up to 4.10. A driver could legitimately return only a 2.1 version even when supporting OpenGL 4.5, if it does not include a COMPATIBILITY PROFILE.

Some useful links:

OpenGL version history -> http://en.wikipedia.org/wiki/OpenGL
about extensions -> https://www.opengl.org/wiki/OpenGL_Extension
extensions registry -> https://www.opengl.org/registry/
context creation -> https://www.opengl.org/wiki/Creating_an ... _%28WGL%29
deprecated, what and when -> https://www.opengl.org/registry/oldspecs/gl.spec

Hope this helps.
"Have you tried turning it off and on again ?"
A little PureBasic review
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

Re: glDrawArrays Invalid Memory Access

Post by Samuel »

Wow! Thanks, Luis.
That's a brick of a post, but it's well worth the read.
luis wrote: In OSX, because Apple is what it is, supposing the driver is supporting GL 4.3 again:
You ask for a legacy RC, you get a legacy RC supporting GL 2.1.
You ask for a COMPATIBILITY PROFILE, you get a legacy RC supporting GL 2.1.
What in the world is Apple doing........
luis
Addict
Posts: 3876
Joined: Wed Aug 31, 2005 11:09 pm
Location: Italy

Re: glDrawArrays Invalid Memory Access

Post by luis »

Samuel wrote: What in the world is Apple doing........
Eh, they decided to simplify the driver, freeze the 2.1 stuff, and concentrate only on the CORE PROFILE for modern versions.
It's a lot easier to make everything work as intended if you keep the two models separated.
Apple has constantly demonstrated its will to cut stuff away when it doesn't like something anymore and simply say "rewrite your stuff, it's cooler this way", so it's not a surprise.
Actually this specific simplification is probably the most understandable in a long series of similar decisions, and there is nothing to rewrite this time; it's just less convenient.
One I didn't like one bit, instead, was the removal (as in deprecation) of all the AGL_ stuff -> https://developer.apple.com/library/mac ... index.html

Yet the other major vendors are making an effort, at least for now.

Read if you like "OpenGL 4.5 on NVIDIA Hardware FAQ"
https://developer.nvidia.com/opengl-driver

So nVidia (and even AMD/ATI I believe, I think I read something similar) clearly stated they want to keep the COMPATIBILITY PROFILE alive.

In any case, legacy 2.1 will probably remain everywhere for many years to come (knock on wood), even if the COMPATIBILITY PROFILE were to be dropped by others beyond Apple.

It would be nice for the OpenGLGadget to also support user-specified context creation, with some flags similar to how GLFW implemented it, for example -> http://www.glfw.org/docs/latest/window. ... _hints_ctx

It's simple enough (so to speak) and it works well.
"Have you tried turning it off and on again ?"
A little PureBasic review
Samuel
Enthusiast
Posts: 755
Joined: Sun Jul 29, 2012 10:33 pm
Location: United States

Re: glDrawArrays Invalid Memory Access

Post by Samuel »

It is great to hear that Nvidia (and hopefully AMD) are willing to go that extra mile for developers.
If down the road they have to drop the Compatibility Profile, it's at least nice to know that they tried to keep it available for as long as possible.
luis wrote: It would be nice for the OpenGLGadget to also support user-specified context creation, with some flags similar to how GLFW implemented it.
I agree, this would be a nice feature, especially after hearing how OSX deals with the profiles.