Valerian Merkling | 26 Jul 15:56 2016

Looking for ideas about predictive tile loading.

Hi,

I'm working on a 3D GIS powered by OpenSceneGraph.

I'm planning to use a quadtree of PagedLOD nodes to display my scene.

Detecting which tile is missing and trying to load it is fine, but I would like to be more "predictive":
guess which not-yet-loaded tiles will soon be needed and start loading them right away. I'm
not looking for code or tools, but for theoretical material about it.
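
A minimal sketch of one such scheme, a dead-reckoning prefetch (an illustration only, not existing code):

Code:

#include <osg/Vec3d>

// Estimate where the eye will be 'lookahead' seconds from now by linear
// extrapolation of its recent motion; tiles visible from the predicted
// position can then be requested before they are actually needed.
osg::Vec3d predictEye(const osg::Vec3d& eye, const osg::Vec3d& prevEye,
                      double dt, double lookahead)
{
    const osg::Vec3d velocity = (eye - prevEye) / dt;  // finite-difference velocity
    return eye + velocity * lookahead;                 // linear extrapolation
}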

I know it's a common problem, and I guess there are tons of things about it on the internet. The problem is that
I lack the precise English vocabulary to be able to search for it.

So if anyone could give me the commonly used name for this kind of problem, it would be really helpful! But any
link to documentation is also welcome.

Thank you!

Cheers,
Valerian

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68208#68208

Dima Trepnau | 26 Jul 10:36 2016

Problems implementing Cascaded Shadow Maps

Hi all,

I'm currently trying to implement cascaded shadow maps in an old (OSG 2.8.1) simulation project from
another student. I already managed to create the correct shadow maps, but it seems I have a few strange
problems with uploading the textures to the shaders and transforming the vertices into shadow space.

Problem 1:
When I upload the shadow maps to the shaders, I always end up with values clamped to [0,64]. I have already tried
different internal formats, pixel formats and pixel types.
I also seem to be unable to use uniform arrays.

Problem 2:
To transform a vertex into shadow projection space, I just need to multiply it by the modelViewProjection
matrix of the "sun" and do the perspective division, right?

The shadow maps generated by these cameras definitely look correct, but I get strange results when I try to use
the projection matrix myself.
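
For reference, the full chain I would expect (a sketch only; it assumes the vertex is already in world
coordinates, and sunCamera is a placeholder for the sun's camera). Note that after the perspective division
the result is in [-1,1] NDC and still has to be remapped to [0,1] before sampling the shadow map:

Code:

#include <osg/Camera>

osg::Vec3d worldToShadowUV(const osg::Vec3d& worldVertex, const osg::Camera* sunCamera)
{
    const osg::Matrixd shadowMat =
        sunCamera->getViewMatrix() * sunCamera->getProjectionMatrix();

    // World space -> sun clip space (OSG uses the row-vector convention: v * M).
    const osg::Vec4d clip = osg::Vec4d(worldVertex, 1.0) * shadowMat;

    // Perspective division gives normalized device coordinates in [-1,1].
    const osg::Vec3d ndc(clip.x() / clip.w(), clip.y() / clip.w(), clip.z() / clip.w());

    // Remap from [-1,1] to [0,1] texture space before sampling the shadow map.
    return ndc * 0.5 + osg::Vec3d(0.5, 0.5, 0.5);
}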

I use a uniform object to upload the modelViewProjection matrix of the first sun to the shader by using

Code:

stateset->addUniform(new osg::Uniform(osg::Uniform::FLOAT_MAT4, "shadowMat1"));
stateset->getUniform("shadowMat1")->set(sunView->getCamera()->getViewMatrix() *
                                        sunView->getCamera()->getProjectionMatrix());

First I tried to use the shaders to color the vertices differently for each frustum split, but everything had
the same color.
I then used a vertex that is clearly inside the clip space of the first sun to test it.
When I transform a vertex into view space using the view matrix, I get the expected result, but after
(Continue reading)

Tony Vasile | 26 Jul 05:43 2016

CompositeViewer and FBO

Basically my setup is similar to osgfpdepth, but instead of a simple osgViewer::Viewer object I have an
osgViewer::CompositeViewer at the top.
Currently I only have one view in my CompositeViewer.

After I add the rttCamera as a slave to the view returned by viewer.getView(0) and then add the texCamera to
the same view, the cameras that are processed by osgUtil::RenderStage::runCameraSetup are the main
camera created in the view and the texCamera. If I debug the osgfpdepth program, with its simple
osgViewer::Viewer, its rttCamera and its texCamera, I see both the rttCamera and the texCamera
processed by osgUtil::RenderStage::runCameraSetup.
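
For reference, a minimal sketch of the setup described above (rttCamera and texCamera are assumed to be
configured as in osgfpdepth):

Code:

#include <osgViewer/CompositeViewer>

osgViewer::CompositeViewer viewer;
osg::ref_ptr<osgViewer::View> view = new osgViewer::View;
viewer.addView(view.get());

// Attach both cameras as slaves of the single view; 'false' means the
// slave does not inherit the master camera's scene data.
view->addSlave(rttCamera.get(), false);
view->addSlave(texCamera.get(), false);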

Does anyone have any suggestions as to where to look? I want the same behaviour as in the osgfpdepth example. Thanks
in advance.

------------------------
Tony V

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68206#68206

李真能 | 25 Jul 10:09 2016

osgviewer can't parse .3ds format correctly

Hi,

I downloaded a nanosuit model. When I load this model using Assimp and render it with OpenGL, it's correct (the model_diffuse program) and all textures are loaded, but if I render the model with osgviewer (osgviewer nanosuit2.3ds), some textures are lost. I think the 3DS plugin of OSG is not compatible with this nanosuit model. You can download the model from this link:
    https://pan.baidu.com/s/1gf4psYn
    The file nanosuit2-3ds.osg was produced by osgconv nanosuit2.3ds nanosuit2-3ds.osg, and I deleted some properties: rendering_hint, renderBinMode, binNumber, binName, GL_CULL_FACE, GL_BLEND_ON. After this, some textures are still not loaded correctly, such as the right leg and the right side of the body.

Josiah Jideani | 22 Jul 13:00 2016

3D osg::Image allocation size problem

Hi,

I am developing a scientific visualization application using Qt and OpenSceneGraph. I am trying to create
a 3D osg::Image to add to an osgVolume. I am having problems allocating the image data when I call the
allocateImage member function (see the code snippet below).

The allocation works for equal dimensions less than 640.

When I try to allocate anything above 640x640x640 but less than 800x800x800, the allocation seems to
succeed, because image_s, image_t and image_r hold the correct sizes. However, when I try to write to
the image data (the nested for loops), a segmentation fault is thrown at data[0] = 0.0f, with s = 0, t = 0, and r =
some random but valid number.

I can allocate and write to the image data with sizes between 800x800x800 and 1024x1024x1024, but then a
segmentation fault is thrown from the object code after the call to the viewer's frame() method.

And finally, for sizes above 1024 the allocation fails completely, as image_s, image_t and image_r all hold 0.

Any clue on how to solve this? It was my understanding that the maximum size of the image is limited by the
maximum 3D texture size of the graphics card, which for the Quadro K4200 that I'm using is 4096x4096x4096.
So why am I only able to allocate a 640x640x640 image?
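
For scale, a back-of-the-envelope check of the raw sizes involved (my own addition): a GL_RGBA / GL_FLOAT
volume needs 16 bytes per texel, so even 640^3 is about 4.2 GB and already exceeds the range of a signed
32-bit byte count. If any size computation along the way uses a 32-bit int, that could plausibly explain
the crashes, though that is only an assumption:

Code:

#include <cstdint>
#include <iostream>

int main()
{
    const std::int64_t bytesPerTexel = 4 * sizeof(float);  // GL_RGBA x GL_FLOAT
    for (std::int64_t dim : {640, 800, 1024})
    {
        const std::int64_t bytes = dim * dim * dim * bytesPerTexel;
        std::cout << dim << "^3 -> " << bytes << " bytes"
                  << (bytes > INT32_MAX ? "  (exceeds a signed 32-bit int)" : "")
                  << "\n";
    }
    return 0;
}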

These are the specifications of my system:
Operating system: Opensuse Leap 42.1
RAM: 128GB
Graphics Card: Quadro K4200
Qt: Qt 4.7.1
OSG version: 3.2.3

Thank you!

Cheers,
Josiah

Code:

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(1024, 1024, 1024, GL_RGBA, GL_FLOAT);

int image_s = image->s();
int image_t = image->t();
int image_r = image->r();

for(int s = 0; s < image_s; s++)
{
    for(int t = 0; t < image_t; t++)
    {
        for(int r = 0; r < image_r; r++)
        {
            float* data = (float*) image->data(s,t,r);
            data[0] = 0.0f;
            data[1] = 0.0f;
            data[2] = 1.0f;
            data[3] = 0.1f;
        }
    }
}

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68195#68195

Christian Buchner | 22 Jul 14:48 2016

floating point pbuffers - not supported by current PixelBufferWin32 implementation

Hi all,

I spent the last 3 hours trying to coerce OSG into giving me a floating point pbuffer. Just setting the required bits for the color components to 32 bits in the GraphicsContext traits isn't working.

It turns out that on NVIDIA cards you also have to set the WGL_FLOAT_COMPONENTS_NV flag to "true" to get a valid pixel format on Windows. The following code does this:

    std::vector<int> fAttribList;

    fAttribList.push_back(WGL_SUPPORT_OPENGL_ARB);
    fAttribList.push_back(true);
    fAttribList.push_back(WGL_PIXEL_TYPE_ARB);
    fAttribList.push_back(WGL_TYPE_RGBA_ARB);

    fAttribList.push_back(WGL_RED_BITS_ARB);
    fAttribList.push_back(32);
    fAttribList.push_back(WGL_GREEN_BITS_ARB);
    fAttribList.push_back(32);
    fAttribList.push_back(WGL_BLUE_BITS_ARB);
    fAttribList.push_back(32);
    fAttribList.push_back(WGL_ALPHA_BITS_ARB);
    fAttribList.push_back(32);
    fAttribList.push_back(WGL_STENCIL_BITS_ARB);
    fAttribList.push_back(8);
    fAttribList.push_back(WGL_DEPTH_BITS_ARB);
    fAttribList.push_back(24);
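    // NVIDIA-specific attribute: request floating point color components.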
    fAttribList.push_back(WGL_FLOAT_COMPONENTS_NV);
    fAttribList.push_back(true);
    fAttribList.push_back(WGL_DRAW_TO_PBUFFER_ARB);
    fAttribList.push_back(true);
    fAttribList.push_back(WGL_DOUBLE_BUFFER_ARB);
    fAttribList.push_back(false);

    fAttribList.push_back(0);

    unsigned int nformats = 0;
    int format;
    WGLExtensions* wgle = WGLExtensions::instance();
    wgle->wglChoosePixelFormatARB(hdc, &fAttribList[0], NULL, 1, &format, &nformats);
    std::cout << "Suitable pixel formats: " << nformats << std::endl;

On my GTX 970 card this returns exactly one suitable pixel format (three if you also drop the DOUBLE_BUFFER_ARB requirement).

It seems that the current PixelBufferWin32 implementation cannot be given user-defined attributes for the wglChoosePixelFormatARB call. Is this a capability we should consider adding? Or should we automatically sneak in this vendor-specific flag when the traits specify 32-bit color components and a previous call to wglChoosePixelFormatARB returned 0 matches?

I am leaving this up for debate.

Is there a vendor-neutral alternative to the WGL_FLOAT_COMPONENTS_NV flag?

For now, I can simply patch my local copy of the OSG libraries to support floating point pbuffers on NVIDIA cards.

Christian

<div><div dir="ltr">
<div>
<div>Hi all,<br><br>
</div>I spent the last 3 hours trying to coerce OSG to give me a floating point pbuffer. Just setting the required bits for color components to 32 bits in the graphicscontext traits isn't working.<br><br>
</div>Turns out, on nVidia cards you also have to give the WGL_FLOAT_COMPONENTS_NV flag as "true" to get a valid pixel format on Windows. The following code does this:<br><div>
<br>&nbsp;&nbsp;&nbsp; std::vector&lt;int&gt; fAttribList;<br><br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_SUPPORT_OPENGL_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(true);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_PIXEL_TYPE_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_TYPE_RGBA_ARB);<br><br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_RED_BITS_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(32);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_GREEN_BITS_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(32);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_BLUE_BITS_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(32);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_ALPHA_BITS_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(32);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_STENCIL_BITS_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(8);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_DEPTH_BITS_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(24);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_FLOAT_COMPONENTS_NV);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(true);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_DRAW_TO_PBUFFER_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(true);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(WGL_DOUBLE_BUFFER_ARB);<br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(false);<br><br>&nbsp;&nbsp;&nbsp; fAttribList.push_back(0);<br><br>&nbsp;&nbsp;&nbsp; unsigned int nformats = 0;<br>&nbsp;&nbsp;&nbsp; int format;<br>&nbsp;&nbsp;&nbsp; WGLExtensions* wgle = WGLExtensions::instance();<br>&nbsp;&nbsp;&nbsp; wgle-&gt;wglChoosePixelFormatARB(hdc, &amp;fAttribList[0], NULL, 1, &amp;format, &amp;nformats);<br>&nbsp;&nbsp;&nbsp; std::cout &lt;&lt; "Suitable pixel formats: " &lt;&lt; nformats &lt;&lt; std::endl;<br><br>
</div>
<div>On my GTX 970 card here this returns exactly one suitable pixel format (3 if you drop the DOUBLE_BUFFER_ARB requirement even)..<br><br>
</div>
<div>It seems that the implementation of PixelBufferWin32 cannot currently be given any user-defined attributes to the wglChoosePixelFormatARB function. Is this a capability that we should consider adding? Or should we automatically sneak in this vendor specific flag if the color components the traits specify have 32 bits and a previous call to wglChoosePixelFormatARB returned 0 matches?<br><br>
</div>
<div>I am leaving this up for debate.<br><br>
</div>
<div>Is there a vendor-neutral alternative to the WGL_FLOAT_COMPONENTS_NV flag?<br>
</div>
<div>
<br>For now, I can simply patch my local copy of the OSG libraries to support floating point pbuffers on nVidia cards.<br>
</div>
<div><br></div>
<div>Christian<br><br>
</div>
</div></div>
Daniel Neos | 21 Jul 17:42 2016

Improvement of Arcball Camera Handling

Hi,

I want to improve my camera handling, since it acts a little oddly when I move too far into the center of the scene,
which is the look-at point of the camera.

The camera rotates around a specific point (the center of the bounding sphere), always looking at it.

But if I get too close to the midpoint, sometimes I lose the focus point or the camera rotates very fast.
My code is rather simple; maybe there is a caveat I am not aware of.

Also, the distance changes: if I rotate around the same axis repeatedly, I end up closer or farther away.

Code:

void OsgWidgetEventHandler::rotateOrbitCamera(double angle, const osg::Vec3d& axis,
osgViewer::View* viewer)
{
    const osg::Matrixd rotation = osg::Matrix::rotate(angle, axis);
    osg::Matrixd preTrans = osg::Matrix::identity();
    osg::Matrixd postTrans = osg::Matrix::identity();

    const osg::Vec3 translation = viewer->getCamera()->getViewMatrix().getTrans();
    m_rotationPoint = translation - m_boundingSphereCenter;

    preTrans.setTrans(-m_rotationPoint);
    postTrans.setTrans(m_rotationPoint);
    osg::Matrixd viewMatrix = viewer->getCamera()->getViewMatrix();
    viewMatrix = viewMatrix * (preTrans * rotation *  postTrans);
    viewer->getCamera()->setViewMatrix(viewMatrix);

    osg::Vec3d eye, center, up;
    viewer->getCamera()->getViewMatrixAsLookAt(eye, center, up);
    if (m_focusPointFlag)
    {
        viewer->getCamera()->setViewMatrixAsLookAt(eye, m_boundingSphereCenter, up);
    }
}

The m_focusPointFlag is only false when the camera was translated (not zoomed), so that the look-at point
can be changed.

How can I achieve smooth arcball camera behaviour?
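
One common way to avoid the drift (a sketch of the general technique, not of the code above): instead of
accumulating deltas in the view matrix, rotate the eye offset about the pivot and rebuild the view matrix
from canonical eye/pivot/up state each time. Since the rotation preserves the length of the offset, the
distance to the pivot cannot change:

Code:

#include <osg/Camera>
#include <osg/Quat>

void rotateOrbit(double angle, const osg::Vec3d& axis,
                 const osg::Vec3d& pivot, osg::Camera* camera)
{
    osg::Vec3d eye, center, up;
    camera->getViewMatrixAsLookAt(eye, center, up);

    // Rotate the eye offset and the up vector about the pivot.
    const osg::Quat rotation(angle, axis);
    const osg::Vec3d offset = rotation * (eye - pivot);
    up = rotation * up;

    // Rebuild the view matrix from scratch so floating-point error
    // cannot accumulate frame over frame.
    camera->setViewMatrixAsLookAt(pivot + offset, pivot, up);
}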

Thank you!

Cheers,
Daniel

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68194#68194

Zach Cregan | 21 Jul 07:04 2016

[build] Building OSG for Android on Windows

Hi,

I'm working on a mobile app which displays interactive 3D indoor maps. I mainly work on iOS and have been
using Apple's SceneKit with good results. On Android we've been using a 3D library called JPCT, but the
results aren't quite what we were hoping for. JPCT has issues rendering some of our models and the
performance is nowhere near what we're seeing on iOS with SceneKit (and that's on Android devices with
significantly more power than the iPhones we're testing on). I'm really interested in trying out
OpenSceneGraph as a replacement for JPCT. I'm trying to set up a bare-bones Android application using
OpenSceneGraph to load an OBJ and MTL file from disk with some basic camera controls.

I can't get OSG compiled for Android though. I've been following the instructions from the OpenSceneGraph
website > "Documentation" > "Platform Specifics" > "Android" > "Building OpenSceneGraph for Android
[3.3.x - 3.4]". I'm on Windows, I've cloned the 3.4 branch to my system, downloaded the latest Android NDK,
installed the latest version of cmake, and then ran this command: 
> cmake .. -DANDROID_NDK=C:\Users\Zachary\Desktop\android-ndk-r12b
-DCMAKE_TOOLCHAIN_FILE=../PlatformSpecifics/Android/android.toolchain.cmake
-DOPENGL_PROFILE="GLES2" -DDYNAMIC_OPENTHREADS=OFF -DDYNAMIC_OPENSCENEGRAPH=OFF
-DANDROID_NATIVE_API_LEVEL=15 -DANDROID_ABI=armeabi -DCMAKE_INSTALL_PREFIX=C:\Users\Zachary\Desktop\OpenSceneGraph
 But I'm getting this error: 
> CMake Error at CMakeLists.txt:52 (PROJECT): CMAKE_SYSTEM_NAME is 'Android' but 'NVIDIA Nsight Tegra
Visual Studio Edition' is not installed.

Do I actually need to get this NVIDIA Nsight Tegra Visual Studio Edition? I haven't seen that dependency
mentioned anywhere on the OSG website, forum, or through any Googling so I thought I must be doing
something wrong. I appreciate any help. I have some experience setting up builds for iOS with Xcode, but
this NDK stuff is beyond me.
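
My current guess is that the error comes from CMake defaulting to the Visual Studio generator on Windows,
which supports CMAKE_SYSTEM_NAME 'Android' only through Nsight Tegra; if that's right, explicitly
selecting another generator, e.g.
> cmake .. -G "MinGW Makefiles" [same options as above]
might sidestep the dependency, but I haven't been able to verify this.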

Thank you!
Zach

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68186#68186

duc nguyen | 20 Jul 18:20 2016

[osgPlugins] Cannot use freetype for openscenegraph example

Hi,

First I built OpenSceneGraph from source and ran the example_osgViewerIPhone target, and it works. Then I tried to
integrate the freetype plugin into this target.

I built the osgdb_freetype target from source after configuring the include and library paths of the FreeType
I downloaded from openFrameworks, as OSG's README.txt suggests. The result is a lib file, libOpenThreadsd.a,
in my lib folder. Then I added this lib to the project under Build Phases and built again, but the errors below occur.
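
For reference, since this is a fully static build, my understanding (an assumption on my part) is that the
plugin also has to be referenced explicitly in the application code so the linker keeps it, and that
libfreetype itself must be on the link line as well. A minimal sketch of the registration part:

Code:

// Force the statically linked freetype plugin to be registered; without
// a reference like this the linker may drop the plugin's objects entirely.
#include <osgDB/Registry>

USE_OSGPLUGIN(freetype)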

I also tried building FreeType for the iOS Simulator manually from source, but the result is the same.

Additional info: I build OSG for the iPhone Simulator, so the Architectures settings are all i386. OSG version
3.2.3. CMake 3.2.0. Xcode 6.1. Mac OS X 10.9.4. iOS 8.1.

So how can I make the freetype plugin work with an OSG project, or how do I build FreeType for the iOS Simulator?
Please give me some suggestions. Many thanks.

This is my error log:

Code:
Undefined symbols for architecture i386:
  "_FT_Done_Face", referenced from:
      FreeTypeFont::~FreeTypeFont() in libosgdb_freetyped.a(FreeTypeFont.o)
  "_FT_Done_FreeType", referenced from:
      FreeTypeLibrary::~FreeTypeLibrary() in libosgdb_freetyped.a(FreeTypeLibrary.o)
  "_FT_Get_Char_Index", referenced from:
      FreeTypeFont::getKerning(unsigned int, unsigned int, osgText::KerningType) in libosgdb_freetyped.a(FreeTypeFont.o)
  "_FT_Get_Kerning", referenced from:
      FreeTypeFont::getKerning(unsigned int, unsigned int, osgText::KerningType) in libosgdb_freetyped.a(FreeTypeFont.o)
  "_FT_Init_FreeType", referenced from:
      FreeTypeLibrary::FreeTypeLibrary() in libosgdb_freetyped.a(FreeTypeLibrary.o)
  "_FT_Load_Char", referenced from:
      FreeTypeFont::getGlyph(std::pair<unsigned int, unsigned int> const&, unsigned int) in libosgdb_freetyped.a(FreeTypeFont.o)
      FreeTypeFont::getGlyph3D(unsigned int) in libosgdb_freetyped.a(FreeTypeFont.o)
  "_FT_New_Face", referenced from:
      FreeTypeLibrary::getFace(std::string const&, unsigned int, FT_FaceRec_*&) in libosgdb_freetyped.a(FreeTypeLibrary.o)
  "_FT_Open_Face", referenced from:
      FreeTypeLibrary::getFace(std::istream&, unsigned int, FT_FaceRec_*&) in libosgdb_freetyped.a(FreeTypeLibrary.o)
  "_FT_Outline_Decompose", referenced from:
      FreeTypeFont::getGlyph3D(unsigned int) in libosgdb_freetyped.a(FreeTypeFont.o)
  "_FT_Outline_Get_BBox", referenced from:
      FreeTypeFont::getGlyph3D(unsigned int) in libosgdb_freetyped.a(FreeTypeFont.o)
  "_FT_Set_Charmap", referenced from:
      FreeTypeLibrary::verifyCharacterMap(FT_FaceRec_*) in libosgdb_freetyped.a(FreeTypeLibrary.o)
  "_FT_Set_Pixel_Sizes", referenced from:
      FreeTypeFont::init() in libosgdb_freetyped.a(FreeTypeFont.o)
      FreeTypeFont::setFontResolution(std::pair<unsigned int, unsigned int> const&) in libosgdb_freetyped.a(FreeTypeFont.o)
ld: symbol(s) not found for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Thank you!

Cheers,
duc

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68183#68183

Ravi Mathur | 21 Jul 07:56 2016

Re: [build] OSX X11 Build System Failures

Fix submitted in Submissions Post (http://forum.openscenegraph.org/viewtopic.php?p=68188).

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68190#68190

Bruno Oliveira | 18 Jul 20:13 2016

how to render 16bit depth image

Hello,

I have a 16-bit per channel, 4-channel image. The format is unsigned short (uint16_t). Note that this image is scaled to the full 16-bit range [0, 65535].


I have this code that works well for an 8-bit, 4-channel image:

osg::ref_ptr<osg::Image> image = new osg::Image();
image->setImage(ImageSize, ImageSize, 1,
                GL_RGBA8,
                GL_RGBA,
                GL_UNSIGNED_INT_8_8_8_8_REV,
                MyDataPtr,
                osg::Image::NO_DELETE);



Now I want to port this to 16-bit depth.
How is this done?

I tried

image->setImage(ImageSize, ImageSize, 1,
                GL_RGBA16UI,
                GL_RGBA,
                GL_UNSIGNED_INT_8_8_8_8_REV,
                MyDataPtr,
                osg::Image::NO_DELETE);

But this yields all-black textures. How can I do this without converting my 16-bit image pixel data?
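
Based on the formats involved, a combination that seems more likely to work (an untested sketch) is GL_RGBA16 with GL_UNSIGNED_SHORT: GL_RGBA16UI is an unnormalized integer format, which would also require GL_RGBA_INTEGER as the pixel format and an integer sampler in the shader, whereas the type parameter should match the uint16_t source data:

image->setImage(ImageSize, ImageSize, 1,
                GL_RGBA16,           // normalized 16-bit per channel
                GL_RGBA,
                GL_UNSIGNED_SHORT,   // matches the uint16_t source data
                MyDataPtr,
                osg::Image::NO_DELETE);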
<div><div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>
<div>Hello,<br><br>
</div>I have a 16bit per channel, 4 channel, image. The format is short (uint16_t). Note that this image is scaled to the full 16bit depth [0, 65536]<br><br><br>
</div>I have this code that works well for a 8bit, 4 channel image:<br><br>osg::ref_ptr&lt;osg::Image&gt; image = new osg::Image(); <br>&nbsp;image-&gt;setImage(ImageSize, ImageSize,1,<br>GL_RGBA8,<br>GL_RGBA,<br>GL_UNSIGNED_INT_8_8_8_8_REV, <br>
</div>MyDataPtr,<br>osg::Image::NO_DELETE);<br><br><br><br>
</div>Now I want to por this to 16bit depth.<br>
</div>How is this done?<br><br>
</div>I tried<br><br>&nbsp;image-&gt;setImage(ImageSize, ImageSize,1,<br>GL_RGBA16UI,<br>GL_RGBA,<br>GL_UNSIGNED_INT_8_8_8_8_REV, <br>MyDataPtr,<br>osg::Image::NO_DELETE);<br><br>
</div>But this yields all black textures. How can I do this without converting my 16bit image pixel data?<br>
</div></div>
