views: 754
answers: 5

Hello.

I need to read a sound stream sent by a Flash audio application in my C++ application (C++ is not a hard requirement; it could be C# or any other desktop language). Currently the Flash app sends audio to another Flash app, but I need to receive the same audio in a desktop application. Is there a standard or recommended way to do this?

Thank you for your answers.

A: 

You could try using the sound system from the Gnash project.

Mike Burrows
I think Gnash can read an audio object from a saved SWF file. But I need to receive an audio stream sent by the Flash plugin to another Flash plugin (over HTTP or whatever transport it uses).
Oleg
+1  A: 

How is the sound actually sent? Over the network?

Edit: You'd be capturing the audio from either an HTTP stream or an RTMP stream. Run Wireshark to find out, but I suspect you're doing something slightly shady...

voltagex
The initial idea was to connect a Flash app and a C++ app and send sound in both directions, but it looks like there is no standard way to do this. So we are using a new approach now: we use Flash Media Server and RTMP to connect the two clients. This approach works. But again, there is no standard way to connect to and play an RTMP sound stream from Flash Media Server if you don't use Flash. I'm working on it now.
Oleg
A: 

So basically you want to connect to an RTMP sound stream from Flash Media Server from an arbitrary non-Flash application? Have you taken a look at http://en.wikipedia.org/wiki/Real_Time_Messaging_Protocol ?
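For a feel of what talking RTMP from a non-Flash client involves: the protocol starts with a fixed-size handshake before any audio data flows. A minimal sketch in Python (based on the publicly documented handshake layout; the helper name `build_c0c1` is ours, not from any library):

```python
import os
import struct
import time

def build_c0c1(now=None):
    """Build the client's opening RTMP handshake bytes.

    C0 is a single byte (protocol version 3). C1 is 1536 bytes:
    a 4-byte timestamp, 4 zero bytes, and 1528 bytes of random data.
    """
    ts = int(time.time() if now is None else now) & 0xFFFFFFFF
    c1 = struct.pack(">II", ts, 0) + os.urandom(1528)
    return b"\x03" + c1
```

After sending these 1537 bytes, the client reads the server's S0/S1/S2 replies and echoes S1 back as C2; only then does chunked message traffic (and the audio) begin.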

Dženan
Sure, I've seen this. I have already implemented a C++ RTMP library and can connect to an RTMP server and read the sound stream. I was hoping to find a ready-made solution for this, but found none, so I'm doing it myself.
Oleg
http://rtmpdump.mplayerhq.hu/ is what you want
David Kemp
A: 

Unfortunately, Adobe's protocols are relatively proprietary (hence the Apple-Adobe wars happening lately), but for several languages there are projects that help with RTMP.

WebOrb is commercial, for .NET, Java, PHP: http://www.themidnightcoders.com/products.html

FluorineFX is open source for .NET only: http://www.fluorinefx.com/

I haven't used either of them for RTMP myself, but I have used FluorineFX to connect to a Flash Remoting (AMF) gateway. I imagine it may do what you need for receiving the audio stream in a .NET client.
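For context on what those Flash Remoting gateways are actually shuffling around: AMF0 is a compact binary tagging scheme. As an illustrative sketch (these two encoders follow the documented AMF0 layout; the function names are ours):

```python
import struct

def amf0_string(s):
    """Encode a short string as AMF0: marker 0x02, big-endian u16 length, UTF-8 bytes."""
    data = s.encode("utf-8")
    return b"\x02" + struct.pack(">H", len(data)) + data

def amf0_number(n):
    """Encode a number as AMF0: marker 0x00, big-endian IEEE-754 double."""
    return b"\x00" + struct.pack(">d", float(n))
```

Libraries like FluorineFX hide this layer entirely; the point is only that the wire format is simple enough to decode by hand when debugging a capture.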

mattdekrey
I wrote my own RTMP client in C++. The RTMP part was the easiest one. Then I ran into the problem of decoding the audio stream, and that took too much time and effort. The project got cancelled and we did the task using FMS and Flash itself. That was much easier...
Oleg
A: 

Getting the frames, frame rate, and other attributes of a video clip

If you have experience writing applications with Microsoft DirectShow Editing Services (codename Dexter), this will sound very familiar. On Windows, capturing still frames has traditionally been done using C++ and the Dexter Type Library to access DirectShow COM objects. To do this in the .NET Framework, you can create an interop assembly of DexterLib, which is listed under COM References in VS 2005. However, it takes a good amount of work to figure out how to convert your code from C++ to C#. The problem occurs when you need to pass a pointer reference as an argument to a native function: the CLR does not directly support pointers, since a memory position can change after each garbage-collection cycle.

You can find many articles on using DirectShow on CodeProject and elsewhere, so we will keep it simple. Our goal here is to convert a video file into an array of Bitmaps, and I have tried to keep this as short as possible; of course, you can write your own code to get the Bitmaps out of a live stream and buffer them briefly before sending them.

Basically, we have two options for using DirectShow to convert our video file to frames in .NET:

1. Edit the interop assembly and change the type references from pointers to C# .NET types.
2. Use pointers with the unsafe keyword.

We chose the unsafe (read: fast) method, which means we extract our frames outside of .NET's managed scope. It is important to mention that managed does not always mean better, and unsafe does not really mean unsafe!

```csharp
MediaDetClass mediaClass = new MediaDetClass();
_AMMediaType mediaType;
...
// load the video file
int outputStreams = mediaClass.OutputStreams;
outFrameRate = 0.0;
for (int i = 0; i < outputStreams; i++)
{
    mediaClass.CurrentStream = i;
    try
    {
        // If it can get the frame rate, that's enough for us to accept
        // the video file; otherwise it throws an exception here.
        outFrameRate = mediaClass.FrameRate;
        ....... // get the attributes here
        .....
    }
    catch
    {
        // Not a valid media type? Go to the next output stream.
    }
}

// No frame rate?
if (outFrameRate == 0.0)
    throw new NotSupportedException(
        "The program is unable to read the video file.");

// We have a frame rate? Move on...
...
// Create an array to hold the Bitmaps and initialize
// other objects to store information...

unsafe
{
    ...
    // create a byte pointer to store the BitmapBits
    ...
    while (currentStreamPos < endPosition)
    {
        mediaClass.GetBitmapBits(currentStreamPos, ref bufferSize,
            ref *ptrRefFramesBuffer, outClipSize.Width, outClipSize.Height);
        ...
        // add the frame Bitmap to the frameArray
        ...
    }
}
...
```

Transfer the extracted data over HTTP

So far we have converted our video to an array of Bitmap frames. The next step is to transfer our frames over HTTP all the way to the client's browser. It would be nice if we could just send our Bitmap bits down to the client, but we cannot: HTTP is designed to transport text characters, which means your browser only reads characters that are defined in the HTML page's character set. Anything outside this encoding cannot be displayed directly.

To accomplish this step, we use Base64 encoding to convert our Bitmaps to ASCII characters. Traditionally, Base64 encoding has been used to embed objects in emails. Almost all modern browsers, including the Gecko browsers, Opera, Safari, and KDE (not IE!), support the data: URI scheme for displaying Base64-encoded images. Great! Now our frames are ready to be transferred over HTTP.

```csharp
System.IO.MemoryStream memory = new System.IO.MemoryStream();
while (currentStreamPos < endPosition)
{
    ...
    // Save the Bitmaps somewhere in (managed) memory
    videoBitmaps.Save(memory, System.Drawing.Imaging.ImageFormat.Jpeg);

    // Convert it to Base64
    strFrameArray[frameCount] = System.Convert.ToBase64String(memory.ToArray());

    // Get ready for the next one (truncate the stream so leftover
    // bytes from a larger previous frame don't leak into ToArray)
    memory.SetLength(0);
}
memory.Close();
...
```
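On the browser side, each of those Base64 strings is displayed by wrapping it in a data: URI and using it as an image source. A minimal sketch of producing such a URI (shown in Python for brevity; the helper name is ours):

```python
import base64

def to_data_uri(jpeg_bytes):
    """Wrap raw JPEG bytes in a data: URI usable as an <img src> value."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return "data:image/jpeg;base64," + b64
```

For example, `to_data_uri(frame)` yields a string like `data:image/jpeg;base64,/9j/4AAQ...`, which browsers that support the scheme render directly, with no extra HTTP request per frame.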

But we cannot just send the encoded frames out as one giant string. Instead, we create an XML document that holds our frames and other information about the video, and send that to the client. This way the browser receives our frames as a DOM XML object and can easily navigate through them. Just imagine how easy it is to edit a video that is stored in XML format:

```xml
<video>
  <framerate>14.9850224700412</framerate>
  <size>{Width=160, Height=120}</size>
  <duration>6.4731334</duration>
  <frame>/9j/4AAQSkZJRgABAQEAYAB....</frame>
  ....
</video>
```

(The element names above are a reconstruction; the original markup was stripped during extraction and only the values survive.)
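Producing such a document is straightforward in any language. A small illustrative sketch (the element and attribute names here are our own choices, not from the article's tool):

```python
import base64
import xml.etree.ElementTree as ET

def frames_to_xml(frames, frame_rate, width, height):
    """Pack Base64-encoded JPEG frames plus clip metadata into one XML string."""
    root = ET.Element("video", framerate=str(frame_rate),
                      width=str(width), height=str(height))
    for jpeg in frames:
        frame = ET.SubElement(root, "frame")
        frame.text = base64.b64encode(jpeg).decode("ascii")
    return ET.tostring(root, encoding="unicode")
```

The client can then walk the resulting DOM, pull out each `frame` element's text, and feed it into a data: URI as shown earlier.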

This format also has its drawbacks. Videos converted to Base64-encoded XML files are somewhere between 10% (mostly AVI files) and 300% or more (some WMV files) bigger than their binary equivalents.
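The Base64 step itself contributes a fixed overhead: every 3 input bytes become 4 output characters, about 33%. A quick check:

```python
import base64

raw = b"\x00" * 3000            # 3000 bytes of sample payload
enc = base64.b64encode(raw)     # every 3 bytes become 4 characters
print(len(enc) / len(raw))      # -> 1.3333..., the fixed ~33% Base64 overhead
```

The wider 10%-300% range quoted above comes from the other moving part: re-encoding each frame as a standalone JPEG can be far larger or smaller per frame than the source's native inter-frame compression.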

If you are using an XML file, you don't even need a web server; you can open the HTML from a local directory and it should work! I included an executable in the article's download file that converts your video file to an XML document, which can later be shown in the browser. However, using big files and high-resolution videos is not a good idea!

OK, now we can send out our "Base64-encoded video" XML document as we would any other XML file. Who says XML files always have to be boring record sets, anyway?