@g_truc @matiasgoldberg @daniel_collin @aras_p Yeah, but your hammer is coming in a box labeled screwdriver.
Seriously, why is there even an option to discard and map a dynamic buffer if everyone is going to implement it as a pipeline stall? The whole point of such a call is to allow the implementation to either orphan or ping-pong the backing buffer.
Having that API stall the pipeline is not something anybody ever wants. There is no reason whatsoever to use such an API. Either it shouldn't be there, or it should be fast. OpenGL lets me provide a hint that "Yeah, I'm going to modify this a lot" and then it shrugs its shoulders and ignores me.
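For concreteness, here's a sketch of the two standard GL idioms for "discard and map" that are *supposed* to let the driver orphan or ping-pong the storage instead of stalling (assuming `vbo` is a bound GL_ARRAY_BUFFER of `size` bytes and `data` holds the new contents; error handling omitted):

```c
/* Option 1: re-specify the store (the classic orphaning idiom).
   A good driver swaps in fresh storage; a bad one stalls anyway. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW); /* orphan */
glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW);

/* Option 2: map with an explicit invalidate ("discard") flag. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT |
                             GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, data, size);
glUnmapBuffer(GL_ARRAY_BUFFER);
```

Both spellings tell the implementation "I don't care about the old contents, don't make me wait on in-flight draws" — which is exactly the promise a stalling implementation breaks.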
OpenGL's API provides all sorts of high level looking objects: vertex buffers (of which I can bind multiple at once), textures, uniform buffers, etc. It provides convenient APIs for allocating multiples of these at once. The default usage of all of these, as implied by the documentation, is to allocate one texture object per texture, one vertex buffer object per vertex buffer, etc.
Nowhere does it suggest that actually if I do that then my processor is going to spend all of its time switching between them and my GPU is going to sit there looking bored. Nowhere does it say that, actually, you need to manually manage your memory.
Let us not forget that it only gave us the ability to properly manage our texture memory a year ago! (ARB_sparse_texture/GL4.3)
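(For reference, the sparse-texture model looks roughly like this: reserve a large virtual texture, then commit and decommit physical pages yourself. Function and enum names are from the real extension; in practice region sizes must also align to the implementation's GL_VIRTUAL_PAGE_SIZE_*_ARB values, which this sketch glosses over.)

```c
/* Reserve a 16K x 16K virtual texture with no physical backing yet. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 16384, 16384);

/* Back just one 256x256 region with physical memory. */
glTexPageCommitmentARB(GL_TEXTURE_2D, 0 /*level*/,
                       0, 0, 0 /*offsets*/,
                       256, 256, 1 /*extent*/,
                       GL_TRUE /*commit*/);
```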
There is no *good* reason that
bind texture 0
draw
bind texture 1
draw
should be so much slower than
bind texture 0
draw (using subtexture 0)
draw (using subtexture 1)
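The "subtexture" version above is essentially the array-texture pattern: bind once, then select a layer per draw instead of rebinding. A rough sketch (the uniform name and vertex count are placeholders, not anything from the spec):

```c
/* One allocation: a 2D array texture with 64 layers of 512x512 RGBA8. */
GLuint arr;
glGenTextures(1, &arr);
glBindTexture(GL_TEXTURE_2D_ARRAY, arr);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 512, 512, 64);

/* Draw loop: one bind, then just a uniform change per draw -
   the shader samples the sampler2DArray at the given layer. */
glBindTexture(GL_TEXTURE_2D_ARRAY, arr);
for (int i = 0; i < 2; ++i) {
    glUniform1i(layer_uniform, i);  /* hypothetical uniform location */
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
}
```

There's nothing about the hardware that makes the rebind version inherently expensive; the cost is in the driver-side validation each bind triggers.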
Pretty much the entire reason for that slowness is the complexity of the OpenGL specification, mandated by ridiculous levels of backwards compatibility with designs which don't fit modern hardware.
So, actually, I'll admit that I did make an error in my initial statement: OpenGL is a high level graphics API. It's just a pretty bad one, that you have to instead use as a portal to access the mercifully available low level graphics API which just so happens to be embedded within it.
If OpenGL is going to make me do all of my memory management myself, that's fine. I can live with that. It's just... the current API is a really terrible and convoluted way in which to do that.
It's bad at being a high level API which can manage things on its own in (theoretically) driver developer optimized (i.e. tuned to the hardware) code, and it's awkward at being a low level API in which I have to manage things myself.
OpenGL NG, please give us some of our sanity back.