Using C#'s System.Drawing and converting to a Unigine.Image



Hi All,

I'm working on streaming a camera feed into Unigine using C#.

Currently, my camera delivers its frames as a System.Drawing Bitmap, and I'm wondering how best to convert this to a Unigine Image to display on a WidgetSprite.

I have attempted to stream it into a DirectX texture using the SharpDX libraries, but I have had no luck in doing so thus far.

 

Any suggestions would be appreciated, and if anyone has done something similar, their input would be very informative.

 

Thanks,

OffPlanet


I know that I've been posting a lot about this topic, but this will be a core feature of our product, and making sure it is reliable and efficient is important to us.

I have attempted writing this bitmap to a file and having Unigine load it from that file; however, for just one eye the loading process takes 211 ms, whereas we need this to run at 90 fps, which leaves a budget of approximately 11 ms per frame.
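
For reference, the file round-trip I tried looks roughly like this (a minimal sketch, not the exact code; the path and PNG format are placeholders, "frame" stands for the Bitmap the camera hands us, and Image.load() / WidgetSprite.setImage() are assumed to be available in the C# API):

// Hypothetical sketch of the slow file round-trip (~211 ms per eye).
// Assumes: using System.Drawing; using System.Drawing.Imaging; using Unigine;
void UpdateSpriteViaFile(Bitmap frame, Unigine.Image image, WidgetSprite sprite)
{
	string path = @"D:\camera_frame.png";     // temporary file on disk (placeholder path)
	frame.Save(path, ImageFormat.Png);        // encode the frame and write it to disk
	image.load(path);                         // Unigine reads and decodes it again
	sprite.setImage(image);                   // upload the decoded image to the sprite
}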

Hopefully someone has done similar work and has some insight,

Thanks,

OffPlanet


UPDATE:

I have made a very basic implementation of this, which takes the bitmap's pixels and loops over them to remap them to the Unigine format (roughly the sketch below); however, this process takes approximately 715 ms for one eye at 1280x800.
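
For context, the per-pixel remap is essentially the following (a simplified sketch of what I did, not the exact code; it assumes the camera Bitmap is 24-bit BGR with a positive stride, and that _image was created as FORMAT_RGBA8 with _blob being a Unigine.Blob):

// Naive CPU conversion: copy the 24bpp BGR bitmap into an RGBA8 byte buffer,
// then push it into the Unigine.Image through a Blob.
// Assumes: using System.Drawing; using System.Drawing.Imaging; using System.Runtime.InteropServices;
Rectangle rect = new Rectangle(0, 0, _bitmap.Width, _bitmap.Height);
BitmapData data = _bitmap.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);

byte[] src = new byte[System.Math.Abs(data.Stride) * _bitmap.Height];
Marshal.Copy(data.Scan0, src, 0, src.Length);
_bitmap.UnlockBits(data);

byte[] dst = new byte[_bitmap.Width * _bitmap.Height * 4];
for (int y = 0; y < _bitmap.Height; y++)
{
	for (int x = 0; x < _bitmap.Width; x++)
	{
		int s = y * data.Stride + x * 3;          // source pixel: B, G, R (assuming top-down bitmap)
		int d = (y * _bitmap.Width + x) * 4;      // destination pixel: R, G, B, A
		dst[d + 0] = src[s + 2];
		dst[d + 1] = src[s + 1];
		dst[d + 2] = src[s + 0];
		dst[d + 3] = 255;
	}
}

// Pin the managed buffer so we can pass a pointer to Blob.write(IntPtr, uint).
GCHandle handle = GCHandle.Alloc(dst, GCHandleType.Pinned);
_blob.seekSet(0U);
_blob.write(handle.AddrOfPinnedObject(), (uint)dst.Length);
handle.Free();
_image.setPixels(_blob);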

I'm sure there is a way to optimize this, so any input would be appreciated.

 

Thanks,

OffPlanet


Hi Lukas,

Actually, I don't think C# is the best choice for this task.
It's pretty hard to work with the raw data using C#, and some optimized low-level processing may speed up your application considerably.

Anyway, let's review the basics.
What interface does your camera provide for reading data from it? (I've never worked with any of them, so I have no idea what it might be.)
Is there a C++ counterpart of this interface?
Is System.Drawing.Bitmap necessary at all as an intermediary?

The best scenario I can imagine is one where we are able to stream the data from the camera right into video memory.
Or at least right into the Unigine.Image instance.
With luck, without any pixel processing :)
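
To illustrate the idea: if the camera SDK could hand us a raw pointer to a 32-bit frame, the whole update could shrink to something like this (a rough sketch; camera, GetFramePointer(), width and height are hypothetical placeholders, not a real camera API):

// Hypothetical ideal path: copy the camera's frame buffer straight into a Unigine.Image
// through a Blob and upload it, with no System.Drawing.Bitmap and no per-pixel loop.
System.IntPtr framePtr = camera.GetFramePointer();   // hypothetical camera SDK call
uint frameBytes = (uint)(width * height * 4);        // assuming a 32-bit pixel format

_blob.seekSet(0U);
_blob.write(framePtr, frameBytes);
_image.setPixels(_blob);
_texture.setImage(_image);                           // upload to video memory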


Hi Ded,

Our camera provides a C# and a C++ interface; however, the DLLs are native and using the C# interface is recommended. The interface itself outputs a System.Drawing Bitmap using the PixelFormat.Format24bppRgb format.

From what I can tell, it does its image processing internally and outputs this format, which works in Windows Forms applications and in Unity. However, since this format doesn't appear to be supported in Unigine, I am having some trouble coming up with a solution.

Most of my programming experience is in C#, so using C++ would be a last resort.

Hope you can help with this,

Thanks,

OffPlanet.


I've done some research to understand the problem.

I have found that when we access the bitmap data as a byte array, we get flipped BGR pixels even though the format is named Format24bppRgb :)
So indeed, some processing is required.
Assuming that this processing takes most of the time, we can improve things by moving it to the GPU.
Please check out the example of a possible solution:

// File: AppWorldLogic.cs

using System.Drawing;
using System.Drawing.Imaging;
using Unigine;

namespace UnigineApp
{
	class AppWorldLogic : WorldLogic
	{
		private Bitmap _bitmap;

		private WidgetSprite _sprite = null;

		private Blob _blob = null;
		private Unigine.Image _image = null;

		Material _flip_material = null;

		private TextureRender _textureRender;
		private Texture _texture = null;
		private Texture _result = null;

		public AppWorldLogic()
		{
		}

		public override int init()
		{
			// Static test image used in place of the live camera feed.
			_bitmap = new Bitmap(@"D:\test.jpg");

			Gui gui = Gui.get();
			_sprite = new WidgetSprite(gui);
			gui.addChild(_sprite);

			_blob = new Blob();

			_image = new Unigine.Image();
			_image.create2D(_bitmap.Width, _bitmap.Height, Unigine.Image.FORMAT_RGBA8);

			_flip_material = Materials.get().findMaterial("flip_gdi_pixel_post");
			
			return 1;
		}

		public override int shutdown()
		{
			_bitmap.Dispose();
			_bitmap = null;

			_sprite.clearPtr();
			_sprite = null;

			_blob.clearPtr();
			_blob = null;

			_image.clearPtr();
			_image = null;

			DestroyVideoResources();

			return 1;
		}

		public override int update()
		{
			Rectangle rect = new Rectangle(0, 0, _bitmap.Width, _bitmap.Height);

			// Note that we acquire the lock for 32-bit pixel format.
			// Graphics API doesn't support 24-bit pixels, 
			// so we have to perform format conversion at some point anyway.
			BitmapData data = _bitmap.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);

			_blob.seekSet(0U);
			uint nBytes = (uint)(System.Math.Abs(data.Stride) * _bitmap.Height);
			_blob.write(data.Scan0, nBytes);

			_bitmap.UnlockBits(data);

			_image.setPixels(_blob);

			InitVideoResources();

			_texture.setImage(_image);

			// Run the BGR->RGB swap on the GPU: render the post material into _result,
			// which is the texture the sprite displays.
			RenderState renderState = RenderState.get();
			renderState.saveState();
			renderState.clearStates();
			_textureRender.setColorTexture(0, _result);
			_textureRender.enable();
			Render.get().renderPostMaterial(_flip_material, _texture);
			_textureRender.disable();
			_textureRender.unbindColorTexture();
			renderState.restoreState();

			return 1;
		}

		public override int destroy()
		{
			DestroyVideoResources();
			return 1;
		}

		private void InitVideoResources()
		{
			if (_textureRender == null)
			{
				_textureRender = new TextureRender();
				_textureRender.create2D(_image.getWidth(), _image.getHeight());
			}

			if (_texture == null)
			{
				_texture = Texture.create();
				_texture.create2D(_image.getWidth(), _image.getHeight(), Texture.FORMAT_RGBA8, Texture.FILTER_POINT);
			}

			if (_result == null)
			{
				_result = Texture.create();
				_result.create2D(_image.getWidth(), _image.getHeight(), Texture.FORMAT_RGBA8, Texture.USAGE_RENDER);
				_sprite.setRender(_result);
			}
		}

		private void DestroyVideoResources()
		{
			if (_textureRender != null)
			{
				_textureRender.clearPtr();
				_textureRender = null;
			}

			if (_texture != null)
			{
				_texture.clearPtr();
				_texture = null;
			}

			if (_result != null)
			{
				_result.clearPtr();
				_result = null;
			}
		}
	}
}
<!-- File: data/materials/flip_gdi_pixel_post.basemat -->
<!-- This is a new format of base material files in SDK 2.6. -->
<!-- For earlier SDK versions, change the extension to ".mat" and the "base_material" tag to "material". -->
<base_material version="2.5.0.1" name="flip_gdi_pixel_post" hidden="1">
	<texture name="color" unit="0" pass="post" type="procedural"/>
	<shader pass="post" fragment="shaders/flip_gdi_pixel_post.shader"/>
</base_material>
// File: data/shaders/flip_gdi_pixel_post.shader
#include <core/shaders/common/fragment.h>

INIT_TEXTURE(0, TEX_GDI)

MAIN_BEGIN(FRAGMENT_OUT, FRAGMENT_IN)
	
	float4 gdi_color = TEXTURE_BIAS_ZERO(TEX_GDI, IN_UV);
	
	OUT_COLOR.rgb = float3(gdi_color.b, gdi_color.g, gdi_color.r);
	OUT_COLOR.a = 1.0f;
	
MAIN_END

It works pretty well for me, but I use a static bitmap, as you can see.

It's possible that the bottleneck is somewhere else, though.
For example, we may find out that acquiring a lock on the bitmap pixels blocks the update for a long time because the camera acquires locks for writing too.

If GPU processing doesn't help, please profile the update() function thoroughly to locate the exact place that should be optimized.
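
A quick way to do that is to time each section of update() with the standard .NET Stopwatch (a minimal sketch; the section labels are just examples):

// Rough per-section timing inside update(), using System.Diagnostics.Stopwatch.
var sw = System.Diagnostics.Stopwatch.StartNew();

// ... LockBits + Blob copy + setPixels ...
System.Console.WriteLine("copy: {0:F2} ms", sw.Elapsed.TotalMilliseconds);
sw.Restart();

// ... _texture.setImage(_image) ...
System.Console.WriteLine("upload: {0:F2} ms", sw.Elapsed.TotalMilliseconds);
sw.Restart();

// ... renderPostMaterial ...
System.Console.WriteLine("post material: {0:F2} ms", sw.Elapsed.TotalMilliseconds);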

Don't forget to inform us about any progress with it! :)
