I am currently developing an application that needs to record audio, encode it as AAC, stream it, and do the same in reverse: receive the stream, decode the AAC and play the audio.
I successfully recorded AAC (wrapped in an MP4 container) using the MediaRecorder class, and successfully up-streamed raw audio using the AudioRecord class. But I need to encode the audio as I stream it, and neither of these classes seems to let me do that.
I researched a bit, and found that most people who have this problem end up using a native library like ffmpeg.
But I was wondering: since Android already includes Stagefright, which has native code that can do encoding and decoding (for example, AAC encoding and AAC decoding), is there a way to use this native code in my application? How can I do that?
It would be great if I only needed to implement some JNI classes with their native code. Plus, since it is an Android library, there would be no licensing problems whatsoever (correct me if I'm wrong).
Yes, you can use libstagefright; it's very powerful.
Since stagefright is not exposed through the NDK, you will have to do extra work.
There are two ways:
(1) Build your project inside the full Android source tree. This takes a few days to set up; once ready, it's very easy, and you can take full advantage of stagefright (see the build-file sketch after this list).
(2) Copy the include files into your project; they are inside this folder:
android-4.0.4_r1.1/frameworks/base/include/media/stagefright
Then you will have to export the library functions by dynamically loading libstagefright.so, which you can then link against in your JNI project (see the dlopen sketch after this list).
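
For option (1), the build glue is an ordinary Android.mk inside the tree. A minimal sketch, assuming a single-file command-line tool; the module name, source file and include paths here are made up and will need adjusting for your tree:

# Hypothetical Android.mk for a tool built inside the AOSP source tree.
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_SRC_FILES := myencoder.cpp
LOCAL_C_INCLUDES := \
    frameworks/base/include/media/stagefright \
    frameworks/base/include/media/stagefright/openmax

# Link against stagefright and the libraries it depends on.
LOCAL_SHARED_LIBRARIES := libstagefright libmedia libutils libbinder

LOCAL_MODULE := myencoder
include $(BUILD_EXECUTABLE)

For option (2), the loading step itself is plain dlopen/dlsym. A minimal sketch; "SYMBOL_NAME" is a placeholder, since the real exports are mangled C++ names that you can list with nm -D /system/lib/libstagefright.so:

#include <dlfcn.h>
#include <stdio.h>

int main() {
    // Open the system's stagefright library at runtime.
    void *handle = dlopen("libstagefright.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Resolve a symbol you need; "SYMBOL_NAME" stands in for one of the
    // mangled C++ names exported by your device's build.
    void *fn = dlsym(handle, "SYMBOL_NAME");
    if (fn == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
    }

    dlclose(handle);
    return 0;
}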
Encoding and decoding with stagefright is very straightforward; a few hundred lines of code will do.
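
For the decoding side the question asks about, the pattern is to pass false for the encoder flag when creating the codec. A minimal sketch, assuming mp4Source is a MediaSource you already obtained (for example, an AAC track from a file or stream extractor); error handling is mostly omitted:

OMXClient client;
CHECK_EQ(client.connect(), (status_t)OK);

// Create a decoder for the track; the track's own MetaData tells
// OMXCodec which codec to instantiate.
sp<MediaSource> decoder = OMXCodec::Create(
        client.interface(),
        mp4Source->getFormat(),
        false /* createEncoder */,
        mp4Source);

CHECK_EQ(decoder->start(), (status_t)OK);

MediaBuffer *outBuffer;
while (decoder->read(&outBuffer) == OK) {
    // outBuffer->data() now holds decoded PCM; hand it to your audio output.
    outBuffer->release();
}

decoder->stop();
client.disconnect();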
I used stagefright to capture screenshots and turn them into a video; this will be available in our Android VNC server, to be released soon.
The following is a snippet; I think it's better than using ffmpeg to encode a movie. You can add an audio source as well (sketched after the snippet).
class ImageSource : public MediaSource {
public:
    ImageSource(int width, int height, int colorFormat)
        : mWidth(width),
          mHeight(height),
          mColorFormat(colorFormat) {
    }

    virtual status_t read(
            MediaBuffer **buffer, const MediaSource::ReadOptions *options) {
        // Here you can fill the buffer with your pixels and return OK,
        // or return ERROR_END_OF_STREAM when there is nothing left to encode.
    }

    // ... (start(), stop(), getFormat() and the mWidth/mHeight/mColorFormat
    // members are implemented here as well)
};
// Example values; the original constants were not shown.
static const int32_t kFramerate = 30;              // frames per second
static const int32_t kVideoBitRate = 512 * 1024;   // bits per second
static const int32_t kIFramesIntervalSec = 1;
static const int64_t kDurationUs = 10 * 1000000LL; // 10 seconds

int width = 720;
int height = 480;
int colorFormat = OMX_COLOR_FormatYUV420Planar;    // must match your pixel data

sp<MediaSource> img_source = new ImageSource(width, height, colorFormat);

sp<MetaData> enc_meta = new MetaData;
// enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
// enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
enc_meta->setInt32(kKeyWidth, width);
enc_meta->setInt32(kKeyHeight, height);
enc_meta->setInt32(kKeySampleRate, kFramerate);    // this key carries the frame rate here
enc_meta->setInt32(kKeyBitRate, kVideoBitRate);
enc_meta->setInt32(kKeyStride, width);
enc_meta->setInt32(kKeySliceHeight, height);
enc_meta->setInt32(kKeyIFramesInterval, kIFramesIntervalSec);
enc_meta->setInt32(kKeyColorFormat, colorFormat);

OMXClient client;
CHECK_EQ(client.connect(), (status_t)OK);

sp<MediaSource> encoder = OMXCodec::Create(
        client.interface(), enc_meta, true /* createEncoder */, img_source);

sp<MPEG4Writer> writer = new MPEG4Writer("/sdcard/screenshot.mp4");
writer->addSource(encoder);

// you can add an audio source here if you want to encode audio as well
// (see the sketch below):
//
// sp<MediaSource> audioEncoder =
//     OMXCodec::Create(client.interface(), encMetaAudio, true, audioSource);
// writer->addSource(audioEncoder);

writer->setMaxFileDuration(kDurationUs);
CHECK_EQ(OK, writer->start());

while (!writer->reachedEOS()) {
    fprintf(stderr, ".");
    usleep(100000);
}

status_t err = writer->stop();
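
Since the question is specifically about AAC, here is a sketch of what the audio branch could look like, using stagefright's AudioSource (which reads PCM from the microphone). The sample rate, channel count and bit rate are example values, the AudioSource constructor signature varies between Android versions, and these addSource() calls must happen before writer->start():

sp<MediaSource> audioSource = new AudioSource(
        AUDIO_SOURCE_MIC, 44100 /* sampleRate */, 1 /* channelCount */);

// Describe the desired output format; OMXCodec picks the AAC encoder
// from the MIME type.
sp<MetaData> encMetaAudio = new MetaData;
encMetaAudio->setCString(kKeyMIMEType, MEDIA_MIMETYPE_AUDIO_AAC);
encMetaAudio->setInt32(kKeySampleRate, 44100);
encMetaAudio->setInt32(kKeyChannelCount, 1);
encMetaAudio->setInt32(kKeyBitRate, 128000);

sp<MediaSource> audioEncoder = OMXCodec::Create(
        client.interface(), encMetaAudio, true /* createEncoder */, audioSource);
writer->addSource(audioEncoder);

For streaming instead of writing a file, you would drop the MPEG4Writer and instead read() encoded buffers from the encoder in a loop, sending each one over the network yourself.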