I want to create an Android application that can locate a video file (more than 300 MB) and compress it to a smaller MP4 file.
I already tried to do it with this.
That tutorial is only really effective when you're compressing a small video (below 100 MB), so I tried to implement it using JNI instead.
I managed to build FFmpeg using this.
But what I currently want to do is compress videos. I don't have very good knowledge of JNI, but I tried to understand it using the following link:
If someone can guide me through the steps to compress a video after opening it using JNI, that would be really great. Thanks!
Assuming you've got the String path of the input file, we can accomplish your task fairly easily. I'll assume you have an understanding of the NDK basics: how to connect a native .c file to the native methods of a corresponding .java file (let me know if that's part of your question). Instead, I'll focus on how to use FFmpeg within the context of Android / JNI.
High-Level Overview: open the input file, create an output format context that mirrors the input's streams, tweak the output codec parameters as desired, write the file header, copy packets from input to output until EOF, and finally write the trailer. In code:
#include <jni.h>
#include <android/log.h>
#include <string.h>
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#define LOG_TAG "FFmpegWrapper"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
void Java_com_example_yourapp_yourJavaClass_compressFile(JNIEnv *env, jobject obj, jstring jInputPath, jstring jInputFormat, jstring jOutputPath, jstring jOutputFormat){
    // One-time FFmpeg initialization
    av_register_all();
    avformat_network_init();
    avcodec_register_all();

    const char* inputPath = (*env)->GetStringUTFChars(env, jInputPath, NULL);
    const char* outputPath = (*env)->GetStringUTFChars(env, jOutputPath, NULL);
    // Format names are hints. See the available options on your host machine via $ ffmpeg -formats
    const char* inputFormat = (*env)->GetStringUTFChars(env, jInputFormat, NULL);
    const char* outputFormat = (*env)->GetStringUTFChars(env, jOutputFormat, NULL);

    AVFormatContext *outputFormatContext = avFormatContextForOutputPath(outputPath, outputFormat);
    AVFormatContext *inputFormatContext = avFormatContextForInputPath(inputPath, inputFormat /* not strictly necessary, since the file can be inspected */);
    copyAVFormatContext(&outputFormatContext, &inputFormatContext);

    // Modify outputFormatContext->codec parameters per your liking
    // See http://ffmpeg.org/doxygen/trunk/structAVCodecContext.html
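    // For example (just a sketch of the idea): you could request a lower video bitrate
    // on the output streams. Note that the packet-copy loop below writes the already
    // encoded frames unchanged, so a value like bit_rate only takes effect if you also
    // decode and re-encode each frame instead of copying packets straight through.
    unsigned int streamIndex;
    for (streamIndex = 0; streamIndex < outputFormatContext->nb_streams; streamIndex++) {
        AVCodecContext *outCodecCtx = outputFormatContext->streams[streamIndex]->codec;
        if (outCodecCtx->codec_type == AVMEDIA_TYPE_VIDEO) {
            outCodecCtx->bit_rate = 1000000; // e.g. ~1 Mbit/s; tune to taste
        }
    }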
    int result = openFileForWriting(outputFormatContext, outputPath);
    if(result < 0){
        LOGE("openFileForWriting error: %d", result);
    }
    writeFileHeader(outputFormatContext);

    // Copy the input to the output packet by packet
    AVPacket *inputPacket = av_malloc(sizeof(AVPacket));
    av_init_packet(inputPacket);
    int continueRecording = 1;
    int avReadResult = 0;
    int writeFrameResult = 0;
    int frameCount = 0;
    while(continueRecording == 1){
        avReadResult = av_read_frame(inputFormatContext, inputPacket);
        frameCount++;
        if(avReadResult != 0){
            if (avReadResult != AVERROR_EOF) {
                LOGE("av_read_frame error: %s", stringForAVErrorNumber(avReadResult));
            } else {
                LOGI("End of input file");
            }
            continueRecording = 0;
            break; // Don't try to write a packet that was never read
        }
        AVStream *outStream = outputFormatContext->streams[inputPacket->stream_index]; // Not used below, but handy if you need to adjust timestamps per stream
        writeFrameResult = av_interleaved_write_frame(outputFormatContext, inputPacket);
        if(writeFrameResult < 0){
            LOGE("av_interleaved_write_frame error: %s", stringForAVErrorNumber(writeFrameResult));
        }
    }
    // Finalize the output file
    int writeTrailerResult = writeFileTrailer(outputFormatContext);
    if(writeTrailerResult < 0){
        LOGE("av_write_trailer error: %s", stringForAVErrorNumber(writeTrailerResult));
    }
    LOGI("Wrote trailer");

    // Release the Java strings we borrowed at the top
    (*env)->ReleaseStringUTFChars(env, jInputPath, inputPath);
    (*env)->ReleaseStringUTFChars(env, jOutputPath, outputPath);
    (*env)->ReleaseStringUTFChars(env, jInputFormat, inputFormat);
    (*env)->ReleaseStringUTFChars(env, jOutputFormat, outputFormat);
}
For the full content of all the auxiliary functions (the ones in camelCase), see my full project on Github. Got questions? I'm happy to elaborate.
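If you want something concrete to picture while you browse the repo, here are rough sketches of a few of the simpler helpers. The versions in my project may differ a little (extra logging and error handling), so treat these as illustrative rather than definitive:

// Human-readable string for an FFmpeg error code, built on av_strerror()
static char avErrorBuffer[256];

char* stringForAVErrorNumber(int errorNumber){
    if(av_strerror(errorNumber, avErrorBuffer, sizeof(avErrorBuffer)) < 0){
        strncpy(avErrorBuffer, "Unknown AVERROR", sizeof(avErrorBuffer));
    }
    return avErrorBuffer;
}

// Open the output file for writing, unless the muxer doesn't need a file at all
int openFileForWriting(AVFormatContext *avfc, const char *path){
    if(avfc->oformat->flags & AVFMT_NOFILE){
        return 0;
    }
    return avio_open(&avfc->pb, path, AVIO_FLAG_WRITE);
}

// Thin wrappers around the libavformat header / trailer calls
int writeFileHeader(AVFormatContext *avfc){
    return avformat_write_header(avfc, NULL);
}

int writeFileTrailer(AVFormatContext *avfc){
    return av_write_trailer(avfc);
}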