Owen Shepherd

@cmuratori That's it, we're done forever? Graphics are good enough for everything? I shouldn't ever want my GPU to be able to drive a VR headset at, say, 120Hz and push enough pixels to avoid the screen door effect?

A fixed ISA would be a death sentence for the GPU, because the options for increasing efficiency that exist on a CPU don't exist when you've got to fit hundreds of little cores on a chip, with groups of them ganged together in lockstep. As a result, you're relying far more on the compiler to understand the micro-architectural details, precisely so you can get rid of all the complex logic that CPUs need.

The best you'd do is replace one IR (GLSL, DXSI, SPIR, PTX, whatever) with another, because in order to make things performant again the GPU vendors would recompile your code on the fly (on the GPU if necessary). Except now that IR would be designed as an ISA, which is a pretty poor choice for an IR, because you throw away so much information that is useful to the optimizer.

Is GLSL terrible? Well, yeah. But that's like the rest of OpenGL: a combination of poor decisions, and decisions which may have been great at the time yet have proven to be wrong in the long term.

Owen Shepherd

@cmuratori @gunvulture @TimothyLottes @grumpygiant @tom_forsyth @Jonathan_Blow You want to ossify GPU design just as the onward march of silicon process improvements is slowing to a crawl? As innovative design becomes our last refuge in the search for better performance?

Owen Shepherd

Syntax highlighting

Owen Shepherd

@gpakosz My site is for full-scale blogging as well, when I get around to doing that. Do people click on the "more" link in general? I don't know. It seems that they do inside long discussions, though.

Really, the main purpose of my site is twofold: firstly, content preservation: Twitter will probably be here tomorrow, but what about in ten years' time? Secondly, finding things I've said so I can link to them (Twitter search is less than stellar).

Owen Shepherd

@g_truc @matiasgoldberg @daniel_collin @aras_p Yeah, but your hammer is coming in a box labeled screwdriver.

Seriously, why is there even an option to discard and map a dynamic buffer if everyone is going to implement it as a pipeline stall? The whole point of such a call is to allow the implementation to either orphan or ping-pong the backing buffer.

Having that API stall the pipeline is not something anybody ever wants. There is no reason whatsoever to use such an API. Either it shouldn't be there, or it should be fast. OpenGL lets me provide a hint that "Yeah, I'm going to modify this a lot" and then it shrugs its shoulders and ignores me.
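
For reference, the orphaning idiom I'm talking about looks roughly like this. A minimal sketch in C against desktop GL, assuming a function loader and the usual headers are already set up; vbo, size and vertex_data are placeholders:

/* Orphan the old storage: same size, NULL data, so the driver can hand
   back fresh memory instead of stalling on in-flight draws. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW);

/* Or, equivalently, map with the invalidate flag and write the new data. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, vertex_data, size);
glUnmapBuffer(GL_ARRAY_BUFFER);

Either spelling is the implementation's cue to swap in a new allocation behind the scenes; a driver that turns it into a wait-for-GPU defeats the whole point.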

OpenGL's API provides all sorts of high-level-looking objects: vertex buffers (of which I can bind multiple at once), textures, uniform buffers, etc. It provides convenient APIs for allocating multiples of these at once. The default usage of all of these, as implied by the documentation, is to allocate one texture object per texture, one vertex buffer object per vertex buffer, etc.

Nowhere does it suggest that actually if I do that then my processor is going to spend all of its time switching between them and my GPU is going to sit there looking bored. Nowhere does it say that, actually, you need to manually manage your memory.

Let us not forget that it only gave us the ability to properly manage our texture memory a year ago! (ARB_sparse_texture/GL4.3)

There is no *good* reason that
bind texture 0
draw
bind texture 1
draw

should be so much slower than
bind texture 0
draw (using subtexture 0)
draw (using subtexture 1)

Pretty much the entire reason for that slowness is the complexity of the OpenGL specification, mandated by ridiculous levels of backwards compatibility with designs which don't fit modern hardware.
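
(If you want the "subtexture" route spelled out: one way is a texture array, so both draws share a single binding. A rough sketch in C; width, height, pixels0, pixels1, layer_uniform and the draw parameters are just illustrative, and the shader is assumed to sample a sampler2DArray using a layer index uniform.)

/* One array texture holding both images as layers. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 2);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, width, height, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels0);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 1, width, height, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels1);

/* Two draws, one binding: the layer index selects the "subtexture". */
glUniform1i(layer_uniform, 0);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
glUniform1i(layer_uniform, 1);
glDrawArrays(GL_TRIANGLES, first_vertex, vertex_count);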

So, actually, I'll admit that I did make an error in my initial statement: OpenGL is a high level graphics API. It's just, it's a pretty bad one, which you have to instead use as a portal to access the mercifully available low level graphics API which just so happens to be embedded within it.

If OpenGL is going to make me do all of my memory management myself, that's fine. I can live with that. It's just... the current API is a really terrible and convoluted way in which to do that.

It's bad at being a high level API which can manage things on its own in (theoretically) driver-developer-optimized (i.e. tuned to the hardware) code, and it's awkward at being a low level API in which I have to manage things myself.

OpenGL NG, please give us some of our sanity back.

Owen Shepherd

@g_truc @matiasgoldberg @daniel_collin @aras_p The API provides features like VBO discard on mapping and then doesn't tell you that, actually, if you use this feature your performance will go down the toilet.

It provides features like the ability to make multiple vertex buffers and then makes switching between them inordinately expensive. (You know it's inordinately expensive because you can bounce around within a single vertex buffer just fine manually; there is no truly good reason the driver couldn't do this under the hood, except that the OpenGL state machine is massively complex and prevents such optimizations.)
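
To be concrete about "bouncing around within a single vertex buffer": pack everything into one VBO and index buffer behind one VAO, and each draw just picks its mesh with an index offset and base vertex, no rebinding anywhere. A sketch; the vao, counts and offsets are made-up placeholders:

glBindVertexArray(vao);  /* one VAO, one big VBO + index buffer behind it */
glDrawElementsBaseVertex(GL_TRIANGLES, mesh_a_index_count, GL_UNSIGNED_SHORT,
                         (void *)mesh_a_index_offset_bytes, mesh_a_base_vertex);
glDrawElementsBaseVertex(GL_TRIANGLES, mesh_b_index_count, GL_UNSIGNED_SHORT,
                         (void *)mesh_b_index_offset_bytes, mesh_b_base_vertex);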

There are lots of areas of the OpenGL API where it provides a feature and then shoots you in the foot when you try to use it, because that feature isn't performant on pretty much any hardware. Quite often you can roll this feature by hand and get better performance.

I get it, the driver vendors have a hard time because of all the corner cases. Of course, this itself is an admission that the spec is broken, because the spec is what is making it hard.

The OpenGL spec is a wasteland of features not to use. It's sad and tragic. It shouldn't be that way.

Maybe "OpenGL NG" will be what we need 5 years too late (not the lower level API bit - that's great and all; just an API which isn't full of detritus which robs the driver developer of optimization capability)

Owen Shepherd

@g_truc @matiasgoldberg @daniel_collin @aras_p The problem with OpenGL is that it claims to be a high level API, and to all appearances is, yet if you try to use it like the API structure suggests you get terrible performance characteristics. To get good performance from GL (and it can be truly brilliant performance) requires using it as a really quite awkward low level API. The whole situation is asinine.

I mean, things like AZDO and the (un)synchronized buffer mapping APIs are great, but the fact that you have to go to such extents to get great performance when the API provides a whole lot of high-level appearing functionality is absolutely bonkers.
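
(For anyone wondering what the (un)synchronized mapping stuff buys you: with ARB_buffer_storage you map a buffer once, persistently, and do the synchronization yourself with fences. A rough sketch in C with the ring-buffer bookkeeping left out; vbo, size, vertex_data and bytes_this_frame are placeholders, and the usual headers/loader are assumed.)

/* Allocate immutable storage that stays mapped for the buffer's lifetime. */
GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);

/* Each frame: write into a region the GPU is no longer reading, then fence. */
memcpy(ptr, vertex_data, bytes_this_frame);
/* ... issue draws that read from this region ... */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
/* Before reusing that region, wait on the fence from the frame that used it. */
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, UINT64_MAX);
glDeleteSync(fence);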

OpenGL's current state is really quite indefensible.

Owen Shepherd

Replied to a post on werd.io:

@benwerd SIL OFL is pretty standard for fonts. Very liberal, do what you want, just mandatory renaming if you make a derivative. Only slightly tricky case is if you're building your own WOFF or such, in which case your WOFF needs to bear a different name (derivatives clause)

Owen Shepherd

Replied to a post on werd.io:

@benwerd I did that too last time my MacBook needed service. I was somewhat disappointed at the lack of looks of disapproval...

Owen Shepherd

Replied to a post on werd.io:

@benwerd Nowhere, I expect. They discontinued production and discounted them a few months before N5 release, so I imagine all you'll be able to find are second hand ones.