Random Phantasies

Update: GSOC

First of all, a few things actually got less comfortable.

That’s because I stopped acting like I have a scene graph and now fully expose the render loop. I think this offers greater flexibility for early adopters.

That means that instead of display, I expose the function toopengl, which I intend to use to build up the scene graph later on.
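A rough sketch of what using the exposed render loop could look like (everything here besides toopengl — createwindow, render — is an assumed name and may well change):

```julia
using GLPlot, GLFW

window = createwindow("GLPlot", 512, 512)    # assumed helper name
obj    = toopengl(rand(Float32, 128, 128))   # data -> renderable object

# the render loop is no longer hidden behind display():
while !GLFW.WindowShouldClose(window)
    render(obj)                 # assumed: issues the draw calls for obj
    GLFW.SwapBuffers(window)
    GLFW.PollEvents()
end
```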

Also I’m not fully happy with the API, but that is to be expected, as I haven’t figured out the scene graph and its interaction with React yet.

BUT, I now have most of the pieces together to do the basic render operations you would expect from a 3D plotting package, and most of the render attributes can be time-varying signals, which enables nice animation capabilities.
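For example, animating a color could look roughly like this with React.jl signals (the function names reflect my reading of React.jl at the time and may differ):

```julia
using React

timedelta = fps(30.0)                  # signal ticking ~30 times per second
t = foldl(+, 0.0, timedelta)           # accumulate elapsed time
# any render attribute can then simply be a lifted signal:
color = lift(x -> Float32[sin(x), cos(x), 0.5, 1.0], t)
# passing `color` as the color attribute makes the object pulse over time
```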

You can find examples with a few comments in GLPlot/examples.

By the way, all shaders are interactive now by default.

That means you can just run the example, open a shader in a text editor, edit something, save -> et voilà =)
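Conceptually, such hot-reloading only needs a file watcher that recompiles the shader whenever the file changes; a minimal sketch (not GLPlot’s actual implementation):

```julia
# poll the shader file's modification time and recompile on change
function watchshader(path, recompile::Function; interval=0.5)
    last = mtime(path)
    while true
        sleep(interval)
        m = mtime(path)
        if m != last
            last = m
            recompile(open(readall, path))  # re-upload & compile the edited source
        end
    end
end
```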

For example, if you run example/surface.jl, GLPlot/src/shader/phongblinn.frag might be interesting.

Or volume.jl and GLPlot/src/shader/iso.frag.

 

Here are some screen-shots/videos:

Going through some iso-values:

Tiny cubes animated with different attributes:

2D geometry projected on z-value grid:


Same without seams (Surface Plot):


Iso surface:


Best,

Simon


Update

Not all updates are on GitHub yet, as I still need to put some work into exposing them through an API.

API:

What I want to implement:

gldisplay(img::Union(Image{Real, Union(1,2,3)}, Texture))

Shows an image as a histogram for a 1D image, a picture for a 2D image, or a volume rendering for a 3D image.

gldisplay(x::Range, y::Range,
          attributes::Union(Image{Real,2}, Texture, Array{Real,2})…;
          primitive=Triangle/Cube/Mesh)

This can be used for any grid-based surface/histogram/bar-graph rendering, but can also be extended into a particle system. It creates a grid x*y, where for every tuple (x,y) the attributes are read from attributes at position (x,y) and applied to the primitive at that grid position.
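In pseudo-Julia, my paraphrase of the intended semantics (draw is just a stand-in name, not a real function):

```julia
# gldisplay(x, y, attributes...; primitive=Cube) roughly means:
for (i, xi) in enumerate(x), (j, yj) in enumerate(y)
    # read this grid cell's attributes (e.g. height, color) at (i, j)
    cellattributes = [a[i, j] for a in attributes]
    # render one primitive at (xi, yj), customized by those attributes
    draw(primitive, xi, yj, cellattributes)
end
```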

The last API is a little difficult to implement if I want to make it fully transparent and customizable for the user, as the attributes depend on some shader.

But for now, I can have some predefined attributes like width, height, xpos, ypos and color, and expose only those to the user.

Example renders are under the “Surfaces” section.

If we advance the OpenCL/OpenGL interoperability, the Texture in attributes and image could also be an OpenCL Image/Buffer, which would make it possible to easily render any OpenCL results to the screen.


 

Volume:


Added:

  • more reliable and faster rendering
  • correct voxel spacing
  • maybe skewed voxels (still needs to be tested)

To be implemented:

  • Lighting
  • Slicing plane
  • Transfer function with 1D texture

 

Surface:

Instanced geometry (in this case a cube) rendering with arrays for attributes like position, height, color:


The data is fully controllable with React.jl.

This also works for surfaces and 3D textures, but is not implemented yet.

This approach only needs to upload about one third of the data compared to my previous approach, but as a downside it requires OpenGL 3.3.
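The saving comes from instancing: the cube geometry is uploaded once, and only the per-instance attributes vary. A back-of-envelope comparison (my assumptions about the layout, not measured numbers):

```julia
n = 100_000                 # number of cubes
floatsize = 4               # sizeof(Float32)
# naive: every cube re-uploads its 8 corners (xyz each): 96 bytes per cube
naive = n * 8 * 3 * floatsize
# instanced: one shared template cube, plus per cube only
# position (xyz), height, color (rgba): 32 bytes per cube
instanced = 8 * 3 * floatsize + n * (3 + 1 + 4) * floatsize
# with this layout the instanced upload is roughly a third of the naive one;
# the exact ratio depends on which attributes each instance carries
```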


 

Camera:

The camera is still horrible, but at least it now works completely with React.

To be fixed:

  • Glitches caused by stale values, because the signal doesn’t restart from zero for a new drag operation.
  • Camera flips, as I don’t calculate the rotation correctly

To be implemented:

  • panning
  • pick rotation axis
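For the first glitch, the fix I have in mind is to accumulate per-event deltas within one drag instead of absolute positions, so nothing leaks across drags (a sketch with hypothetical event fields, not the current camera code):

```julia
# state is (lastpos, rotation); fold this over the mouse event signal
function dragstep(state, event)
    lastpos, rotation = state
    if !event.dragging
        return (event.pos, rotation)   # just resync lastpos, don't rotate
    end
    delta = event.pos - lastpos        # delta within the current drag only
    (event.pos, rotation + delta)      # accumulate rotation from deltas
end
# with React.jl this would be folded over the mouse signal, e.g.:
# camera = foldl(dragstep, (pos0, rot0), mouseevents)
```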

 

3D Picking:

I started with the groundwork for a general 3D picking framework, which will allow for an awesome selection API together with React.

This will provide very precise information for every mouse position, like the distance to the object, the group it belongs to, or coordinates in some custom coordinate space.

With that, precise Mesh/Particle/Volume/Text/UI-element selection and editable object pivots just got one step closer!
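The classic way to get there is ID-buffer picking: render every object’s integer ID into an off-screen buffer and read back the pixel under the mouse. A sketch of the read-back side (not the final framework):

```julia
# every object is drawn into an offscreen framebuffer, writing its
# integer ID instead of a color; on click we read one pixel back
function pick(framebuffer, mousex, mousey)
    glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer)
    id = Array(Uint32, 1)
    glReadPixels(mousex, mousey, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, id)
    id[1]   # the object ID under the mouse, 0 for background
end
```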

Surfaces

Volume Rendering:


Surface Rendering:


 

I made some progress with rendering techniques, but mainly for surface rendering.

As you can see, the surface is finally rendered with Phong shading =)

You can find some more details concerning the future API on GLPlot.jl.

I’m getting closer to fully integrating React.jl, which makes some code a lot nicer, but there are also some parts that got messier.

I hope that’s just because I’m not fully used to “reactive thinking” yet.

Shashi is being very helpful, and I’m positive that integrating React.jl will make it a lot easier to interact with all the plotting parameters.

 

Furthermore, I cleaned up a few bugs and integrated an OpenGL debug callback. But debugging with OpenGL differs a little from graphics card to graphics card, so I still need to figure out a few things there.

I started looking into a custom framebuffer pipeline, which will hopefully enable me to do pretty cool things.

Rendering to off-screen buffers will help me with:

  • very general and fast 3D selection
  • better volume rendering with fast intersection rendering
  • a more fine grained control over re-rendering just parts of the display
  • saving plots as images to disk

Saving images works quite nicely and I already set up a little demo with qjulia:
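Saving boils down to a glReadPixels readback plus a vertical flip, since OpenGL’s origin is bottom-left. A sketch (the Images.jl calls are my assumption, not the demo’s actual code):

```julia
function screenshot(path, w, h)
    data = Array(Uint8, 3, w, h)                     # RGB bytes, column-major
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, data)
    img = flipdim(data, 3)                           # flip vertically
    imwrite(colorim(img, "RGB"), path)               # assumed Images.jl API
end
```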

 

OpenGL OpenCL interop

Good news everyone!

Valentin and I got OpenGL/OpenCL data exchange going!

Example Repo: https://github.com/vchuravy/qjulia_gpu

Picture of the lovely, ray-traced Julia set:


React.jl && Volume Rendering

After some break and a nasty bug that really ate away at my patience, I present you: volume rendering.


 

It’s still in an early phase, but I’m looking forward to incorporating transfer functions and performance tweaks.

Also, I integrated React.jl into my code, which seems to work quite nicely.

It offers a little more functionality and a nicer design than my current event system, but it’s also a little slower.

It’ll be interesting to see how the advantages play out in my system architecture.

Best,

Simon

First progress

 

 


 

The camera is working quite alright now, though it’s still far from pleasant.

If your graphics card supports shader model >= 130, you can even try out the example.

It’s in https://github.com/SimonDanisch/GLPlot.jl

If you don’t have my unregistered packages, there is packages.jl, which you can execute to get all the needed packages.

Then go to GLPlot/src/ and execute gridshader.jl

Known bugs:

  • GLFW doesn’t build under OS X; you have to get the binaries with, for example, Homebrew
  • On Unix it only works if you have cmake, libglu1-mesa-dev and xorg-dev installed
  • Shaders don’t compile if you don’t have the right shader model (relatively easy to fix)
  • Camera flips at certain angles (quite easy to fix as well)
  • No lighting, as I still haven’t started calculating the normals in a generic and future-proof way.

Feel free to report bugs on Github.

GSOC — Starting now!

Project description:

This project is about writing volumetric, particle and surface visualizations entirely in Julia and OpenGL.

Rough timeline:

  • Improve camera
  • Insert axes and labels
  • Improve debugging of shaders and OpenGL code
  • Make shaders more interactive
  • Create plot API
  • Create different example plots
  • Polish things and create cool shaders

Along the way, I want to improve Julia’s OpenGL capabilities in general.