Bug#707476: Fwd: [PATCH 1/2] deshake/stabilisation filters
Vittorio Giovara
vittorio.giovara at gmail.com
Fri Aug 15 12:43:32 UTC 2014
---------- Forwarded message ----------
From: Vittorio Giovara <vittorio.giovara at gmail.com>
Date: Fri, Aug 15, 2014 at 12:55 PM
Subject: [PATCH 1/2] deshake/stabilisation filters
To: libav-devel at libav.org
Cc: georg.martius at web.de
From: Georg Martius <martius at mis.mpg.de>
vid.stab support for libav consists of two filters:
vidstabdetect and vidstabtransform
Signed-off-by: Vittorio Giovara <vittorio.giovara at gmail.com>
---
Here is the filter kindly provided by the author; please keep him in CC.
I just added a version bump, a changelog entry and a few minor cosmetics.
Cheers,
Vittorio
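For anyone who wants to try the two filters together, a minimal two-pass invocation might look like the following sketch. File names and option values are illustrative only, not part of the patch; it assumes a build configured with --enable-gpl --enable-libvidstab:

```shell
# Pass 1: vidstabdetect analyses the shaky input and writes per-frame
# motion data to transforms.trf (the filter's default result file).
avconv -i shaky.mp4 -vf "vidstabdetect=shakiness=8:result=transforms.trf" -f null -

# Pass 2: vidstabtransform reads transforms.trf and compensates the motion;
# the unsharp filter afterwards is recommended, as the docs below note.
avconv -i shaky.mp4 \
       -vf "vidstabtransform=input=transforms.trf:smoothing=20,unsharp=5:5:0.8:3:3:0.4" \
       stabilized.mp4
```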
Changelog | 1 +
configure | 7 +
doc/filters.texi | 250 +++++++++++++++++++++++++++++++
libavfilter/Makefile | 4 +
libavfilter/allfilters.c | 2 +
libavfilter/version.h | 2 +-
libavfilter/vf_vidstabdetect.c | 227 ++++++++++++++++++++++++++++
libavfilter/vf_vidstabtransform.c | 306 ++++++++++++++++++++++++++++++++++++++
libavfilter/vidstabutils.c | 84 +++++++++++
libavfilter/vidstabutils.h | 36 +++++
10 files changed, 918 insertions(+), 1 deletion(-)
create mode 100644 libavfilter/vf_vidstabdetect.c
create mode 100644 libavfilter/vf_vidstabtransform.c
create mode 100644 libavfilter/vidstabutils.c
create mode 100644 libavfilter/vidstabutils.h
diff --git a/Changelog b/Changelog
index ea9d721..884394c 100644
--- a/Changelog
+++ b/Changelog
@@ -31,6 +31,7 @@ version <next>:
- Icecast protocol
- request icecast metadata by default
- support for using metadata in stream specifiers in avtools
+- deshake/stabilisation filters
version 10:
diff --git a/configure b/configure
index c073f30..128f974 100755
--- a/configure
+++ b/configure
@@ -199,6 +199,7 @@ External library support:
--enable-libspeex enable Speex de/encoding via libspeex [no]
--enable-libtheora enable Theora encoding via libtheora [no]
--enable-libtwolame enable MP2 encoding via libtwolame [no]
+ --enable-libvidstab enable video stabilization using vid.stab [no]
--enable-libvo-aacenc enable AAC encoding via libvo-aacenc [no]
--enable-libvo-amrwbenc enable AMR-WB encoding via libvo-amrwbenc [no]
--enable-libvorbis enable Vorbis encoding via libvorbis [no]
@@ -1159,6 +1160,7 @@ EXTERNAL_LIBRARY_LIST="
libspeex
libtheora
libtwolame
+ libvidstab
libvo_aacenc
libvo_amrwbenc
libvorbis
@@ -2142,6 +2144,9 @@ interlace_filter_deps="gpl"
ocv_filter_deps="libopencv"
resample_filter_deps="avresample"
scale_filter_deps="swscale"
+vidstabdetect_filter_deps="libvidstab"
+vidstabtransform_filter_deps="libvidstab"
+
# examples
avcodec_example_deps="avcodec avutil"
@@ -3705,6 +3710,7 @@ die_license_disabled() {
}
die_license_disabled gpl libcdio
+die_license_disabled gpl libvidstab
die_license_disabled gpl libx264
die_license_disabled gpl libx265
die_license_disabled gpl libxavs
@@ -4160,6 +4166,7 @@ enabled libschroedinger && require_pkg_config schroedinger-1.0 schroedinger/sc
enabled libspeex && require_pkg_config speex speex/speex.h speex_decoder_init -lspeex
enabled libtheora && require libtheora theora/theoraenc.h th_info_init -ltheoraenc -ltheoradec -logg
enabled libtwolame && require libtwolame twolame.h twolame_init -ltwolame
+enabled libvidstab && require_pkg_config vidstab vid.stab/libvidstab.h vsMotionDetectInit
enabled libvo_aacenc && require libvo_aacenc vo-aacenc/voAAC.h voGetAACEncAPI -lvo-aacenc
enabled libvo_amrwbenc && require libvo_amrwbenc vo-amrwbenc/enc_if.h E_IF_init -lvo-amrwbenc
enabled libvorbis && require libvorbis vorbis/vorbisenc.h vorbis_info_init -lvorbisenc -lvorbis -logg
diff --git a/doc/filters.texi b/doc/filters.texi
index 28e3292..35606e7 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -2752,6 +2752,256 @@ unsharp=7:7:-2:7:7:-2
./avconv -i in.avi -vf "unsharp" out.mp4
@end example
+
+ at anchor{vidstabdetect}
+ at section vidstabdetect
+
+Analyze video stabilization/deshaking. Perform pass 1 of 2, see
+ at ref{vidstabtransform} for pass 2.
+
+This filter generates a file with relative translation and rotation
+transform information about subsequent frames, which is then used by
+the @ref{vidstabtransform} filter.
+
+To enable compilation of this filter you need to configure Libav with
+ at code{--enable-libvidstab}.
+
+This filter accepts the following options:
+
+ at table @option
+ at item result
+Set the path to the file used to write the transforms information.
+Default value is @file{transforms.trf}.
+
+ at item shakiness
+Set how shaky the video is and how quick the camera moves. It accepts an
+integer in the range 1-10: a value of 1 means little shakiness, a
+value of 10 means strong shakiness. Default value is 5.
+
+ at item accuracy
+Set the accuracy of the detection process. It must be a value in the
+range 1-15. A value of 1 means low accuracy, a value of 15 means high
+accuracy. Default value is 15.
+
+ at item stepsize
+Set stepsize of the search process. The region around minimum is
+scanned with 1 pixel resolution. Default value is 6.
+
+ at item mincontrast
+Set the minimum contrast. Below this value a local measurement field is
+discarded. Must be a floating point value in the range 0-1. Default
+value is 0.25.
+
+ at item tripod
+Set reference frame number for tripod mode.
+
+If enabled, the motion of the frames is compared to a reference frame
+in the filtered stream, identified by the specified number. The idea
+is to compensate all movements in a more-or-less static scene and keep
+the camera view absolutely still.
+
+If set to 0, it is disabled. The frames are counted starting from 1.
+
+ at item show
+Show fields and transforms in the resulting frames. It accepts an
+integer in the range 0-2. Default value is 0, which disables any
+visualization.
+ at end table
+
+ at subsection Examples
+
+ at itemize
+ at item
+Use default values:
+ at example
+vidstabdetect
+ at end example
+
+ at item
+Analyze strongly shaky movie and put the results in file
+ at file{mytransforms.trf}:
+ at example
+vidstabdetect=shakiness=10:accuracy=15:result="mytransforms.trf"
+ at end example
+
+ at item
+Visualize the result of internal transformations in the resulting
+video:
+ at example
+vidstabdetect=show=1
+ at end example
+
+ at item
+Analyze a video with medium shakiness using @command{avconv}:
+ at example
+avconv -i input -vf vidstabdetect=shakiness=5:show=1 dummy.avi
+ at end example
+ at end itemize
+
+ at anchor{vidstabtransform}
+ at section vidstabtransform
+
+Video stabilization/deshaking: pass 2 of 2,
+see @ref{vidstabdetect} for pass 1.
+
+Read a file with transform information for each frame and
+apply/compensate them. Together with the @ref{vidstabdetect}
+filter this can be used to deshake videos. See also
+ at url{http://public.hronopik.de/vid.stab}. It is important to also use
+the unsharp filter, see below.
+
+To enable compilation of this filter you need to configure Libav with
+ at code{--enable-libvidstab}.
+
+This filter accepts the following options:
+
+ at table @option
+
+ at item input
+path to the file used to read the transforms (default: @file{transforms.trf})
+
+ at item smoothing
+Set the number of frames (value * 2 + 1) used for lowpass filtering the
+camera movements (default: 15). For example, a value of 10 means that 21
+frames are used (10 in the past and 10 in the future) to smooth the
+motion in the video. A larger value leads to a smoother video, but limits
+the acceleration of the camera (pan/tilt movements).
+0 is a special case where a static camera is simulated.
+
+ at item optalgo
+Set the camera path optimization algorithm:
+ at table @samp
+ at item gauss
+gaussian kernel low-pass filter on camera motion (default)
+ at item avg
+averaging on transformations
+ at end table
+
+ at item maxshift
+Maximal number of pixels to translate frames (default: -1, no limit).
+
+ at item maxangle
+Maximal angle in radians (degree*PI/180) to rotate frames
+(default: -1, no limit).
+
+ at item crop
+How to deal with borders that may be visible due to movement
+compensation. Available values are:
+
+ at table @samp
+ at item keep
+keep image information from previous frame (default)
+ at item black
+fill the border black
+ at end table
+
+ at item invert
+ at table @samp
+ at item 0
+keep transforms normal (default)
+ at item 1
+invert transforms
+ at end table
+
+ at item relative
+consider transforms as
+ at table @samp
+ at item 0
+absolute
+ at item 1
+relative to previous frame (default)
+ at end table
+
+ at item zoom
+Set percentage to zoom (default: 0)
+ at table @samp
+ at item >0
+zoom in
+ at item <0
+zoom out
+ at end table
+
+ at item optzoom
+Set optimal zooming to avoid borders
+ at table @samp
+ at item 0
+disabled
+ at item 1
+optimal static zoom value is determined (only very strong movements will lead
+to visible borders) (default)
+ at item 2
+optimal adaptive zoom value is determined (no borders will be visible),
+see @option{zoomspeed}
+ at end table
+Note that the value given at zoom is added to the one calculated
+here.
+
+ at item zoomspeed
+Set percent to zoom maximally each frame (for @option{optzoom=2}).
+Range is from 0 to 5; default value is 0.25.
+
+ at item interpol
+Set the type of interpolation.
+
+Available values are:
+ at table @samp
+ at item no
+no interpolation
+ at item linear
+linear only horizontal
+ at item bilinear
+linear in both directions (default)
+ at item bicubic
+cubic in both directions (slow)
+ at end table
+
+ at item tripod
+virtual tripod mode means that the video is stabilized such that the
+camera stays stationary. Use also @code{tripod} option of
+ at ref{vidstabdetect}.
+ at table @samp
+ at item 0
+off (default)
+ at item 1
+virtual tripod mode: equivalent to @code{relative=0:smoothing=0}
+ at end table
+
+ at item debug
+Increase log verbosity if set to 1. The detected global motions are also
+written to the temporary file @file{global_motions.trf}.
+ at table @samp
+ at item 0
+disabled (default)
+ at item 1
+enabled
+ at end table
+
+ at end table
+
+ at subsection Examples
+
+ at itemize
+ at item
+Typical call with default values (note the unsharp filter, which is
+always recommended):
+ at example
+avconv -i inp.mpeg -vf vidstabtransform,unsharp=5:5:0.8:3:3:0.4 inp_stabilized.mpeg
+ at end example
+
+ at item
+Zoom in a bit more and load transform data from a given file:
+ at example
+vidstabtransform=zoom=5:input="mytransforms.trf"
+ at end example
+
+ at item
+Smooth the video even more:
+ at example
+vidstabtransform=smoothing=30
+ at end example
+
+ at end itemize
+
@section vflip
Flip the input video vertically.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 7b94f22..1a86cb5 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -80,6 +80,8 @@ OBJS-$(CONFIG_SPLIT_FILTER) += split.o
OBJS-$(CONFIG_TRANSPOSE_FILTER) += vf_transpose.o
OBJS-$(CONFIG_TRIM_FILTER) += trim.o
OBJS-$(CONFIG_UNSHARP_FILTER) += vf_unsharp.o
+OBJS-$(CONFIG_VIDSTABDETECT_FILTER) += vidstabutils.o vf_vidstabdetect.o
+OBJS-$(CONFIG_VIDSTABTRANSFORM_FILTER) += vidstabutils.o vf_vidstabtransform.o
OBJS-$(CONFIG_VFLIP_FILTER) += vf_vflip.o
OBJS-$(CONFIG_YADIF_FILTER) += vf_yadif.o
@@ -94,5 +96,7 @@ OBJS-$(CONFIG_NULLSINK_FILTER) += vsink_nullsink.o
OBJS-$(HAVE_THREADS) += pthread.o
+SKIPHEADERS-$(CONFIG_LIBVIDSTAB) += vidstabutils.h
+
TOOLS = graph2dot
TESTPROGS = filtfmts
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 67a298d..542cad4 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -105,6 +105,8 @@ void avfilter_register_all(void)
REGISTER_FILTER(TRANSPOSE, transpose, vf);
REGISTER_FILTER(TRIM, trim, vf);
REGISTER_FILTER(UNSHARP, unsharp, vf);
+ REGISTER_FILTER(VIDSTABDETECT, vidstabdetect, vf);
+ REGISTER_FILTER(VIDSTABTRANSFORM, vidstabtransform, vf);
REGISTER_FILTER(VFLIP, vflip, vf);
REGISTER_FILTER(YADIF, yadif, vf);
diff --git a/libavfilter/version.h b/libavfilter/version.h
index 70b08e5..8207811 100644
--- a/libavfilter/version.h
+++ b/libavfilter/version.h
@@ -30,7 +30,7 @@
#include "libavutil/version.h"
#define LIBAVFILTER_VERSION_MAJOR 5
-#define LIBAVFILTER_VERSION_MINOR 0
+#define LIBAVFILTER_VERSION_MINOR 1
#define LIBAVFILTER_VERSION_MICRO 0
#define LIBAVFILTER_VERSION_INT AV_VERSION_INT(LIBAVFILTER_VERSION_MAJOR, \
diff --git a/libavfilter/vf_vidstabdetect.c b/libavfilter/vf_vidstabdetect.c
new file mode 100644
index 0000000..33f6f9a
--- /dev/null
+++ b/libavfilter/vf_vidstabdetect.c
@@ -0,0 +1,227 @@
+/*
+ * Copyright (c) 2013 Georg Martius <georg dot martius at web dot de>
+ *
+ * This file is part of Libav.
+ *
+ * Libav is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * Libav is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with Libav; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <vid.stab/libvidstab.h>
+
+#include "libavutil/common.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+
+#include "vidstabutils.h"
+
+#define DEFAULT_RESULT_NAME "transforms.trf"
+
+typedef struct {
+ const AVClass *class;
+
+ VSMotionDetect md;
+ VSMotionDetectConfig conf;
+
+ char *result;
+ FILE *f;
+} StabData;
+
+static av_cold int init(AVFilterContext *ctx)
+{
+ ff_vs_init();
+ av_log(ctx, AV_LOG_VERBOSE, "vidstabdetect filter: init %s\n",
+ LIBVIDSTAB_VERSION);
+ return 0;
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+ StabData *sd = ctx->priv;
+ VSMotionDetect *md = &(sd->md);
+
+ if (sd->f) {
+ fclose(sd->f);
+ sd->f = NULL;
+ }
+
+ vsMotionDetectionCleanup(md);
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ // If you add something here also add it in vidstabutils.c
+ static const enum AVPixelFormat pix_fmts[] = {
+ AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV420P,
+ AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUVA420P,
+ AV_PIX_FMT_YUV440P, AV_PIX_FMT_GRAY8,
+ AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24, AV_PIX_FMT_RGBA,
+ AV_PIX_FMT_NONE
+ };
+
+ ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
+ return 0;
+}
+
+static int config_input(AVFilterLink *inlink)
+{
+ AVFilterContext *ctx = inlink->dst;
+ StabData *sd = ctx->priv;
+
+ VSMotionDetect* md = &(sd->md);
+ VSFrameInfo fi;
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
+
+ vsFrameInfoInit(&fi, inlink->w, inlink->h,
+ ff_av2vs_pixfmt(ctx, inlink->format));
+ if (fi.bytesPerPixel != av_get_bits_per_pixel(desc) / 8) {
+ av_log(ctx, AV_LOG_ERROR,
+               "pixel-format error: wrong bits/per/pixel, please report a BUG");
+ return AVERROR(EINVAL);
+ }
+ if (fi.log2ChromaW != desc->log2_chroma_w) {
+ av_log(ctx, AV_LOG_ERROR,
+ "pixel-format error: log2_chroma_w, please report a BUG");
+ return AVERROR(EINVAL);
+ }
+
+ if (fi.log2ChromaH != desc->log2_chroma_h) {
+ av_log(ctx, AV_LOG_ERROR,
+ "pixel-format error: log2_chroma_h, please report a BUG");
+ return AVERROR(EINVAL);
+ }
+
+ // set values that are not initialized by the options
+ sd->conf.algo = 1;
+ sd->conf.modName = "vidstabdetect";
+ if (vsMotionDetectInit(md, &sd->conf, &fi) != VS_OK) {
+ av_log(ctx, AV_LOG_ERROR,
+               "initialization of Motion Detection failed, please report a BUG");
+ return AVERROR(EINVAL);
+ }
+
+ vsMotionDetectGetConfig(&sd->conf, md);
+ av_log(ctx, AV_LOG_INFO, "Video stabilization settings (pass 1/2):\n");
+ av_log(ctx, AV_LOG_INFO, " shakiness = %d\n", sd->conf.shakiness);
+ av_log(ctx, AV_LOG_INFO, " accuracy = %d\n", sd->conf.accuracy);
+ av_log(ctx, AV_LOG_INFO, " stepsize = %d\n", sd->conf.stepSize);
+    av_log(ctx, AV_LOG_INFO, " mincontrast = %f\n", sd->conf.contrastThreshold);
+ av_log(ctx, AV_LOG_INFO, " tripod = %d\n", sd->conf.virtualTripod);
+ av_log(ctx, AV_LOG_INFO, " show = %d\n", sd->conf.show);
+ av_log(ctx, AV_LOG_INFO, " result = %s\n", sd->result);
+
+ sd->f = fopen(sd->result, "w");
+ if (sd->f == NULL) {
+        av_log(ctx, AV_LOG_ERROR, "cannot open transform file %s\n", sd->result);
+ return AVERROR(EINVAL);
+ } else {
+ if (vsPrepareFile(md, sd->f) != VS_OK) {
+ av_log(ctx, AV_LOG_ERROR, "cannot write to transform file %s\n",
+ sd->result);
+ return AVERROR(EINVAL);
+ }
+ }
+ return 0;
+}
+
+static int filter_frame(AVFilterLink *inlink, AVFrame *in)
+{
+ AVFilterContext *ctx = inlink->dst;
+ StabData *sd = ctx->priv;
+ VSMotionDetect *md = &(sd->md);
+ LocalMotions localmotions;
+
+ AVFilterLink *outlink = inlink->dst->outputs[0];
+ VSFrame frame;
+ int plane;
+
+    if (sd->conf.show > 0 && av_frame_make_writable(in) < 0) {
+        av_frame_free(&in);
+        return AVERROR(ENOMEM);
+    }
+
+ for (plane = 0; plane < md->fi.planes; plane++) {
+ frame.data[plane] = in->data[plane];
+ frame.linesize[plane] = in->linesize[plane];
+ }
+    if (vsMotionDetection(md, &localmotions, &frame) != VS_OK) {
+        av_log(ctx, AV_LOG_ERROR, "motion detection failed\n");
+        return AVERROR_UNKNOWN;
+    } else {
+        if (vsWriteToFile(md, sd->f, &localmotions) != VS_OK) {
+            av_log(ctx, AV_LOG_ERROR, "cannot write to transform file\n");
+            return AVERROR(errno);
+        }
+        vs_vector_del(&localmotions);
+    }
+
+ return ff_filter_frame(outlink, in);
+}
+
+#define OFFSET(x) offsetof(StabData, x)
+#define OFFSETC(x) (offsetof(StabData, conf) + offsetof(VSMotionDetectConfig, x))
+#define FLAGS AV_OPT_FLAG_VIDEO_PARAM
+static const AVOption vidstabdetect_options[] = {
+    { "result", "path to the file used to write the transforms", OFFSET(result), AV_OPT_TYPE_STRING, { .str = DEFAULT_RESULT_NAME }, .flags = FLAGS },
+    { "shakiness", "how shaky is the video and how quick is the camera? "
+                   "1: little (fast) 10: very strong/quick (slow)", OFFSETC(shakiness), AV_OPT_TYPE_INT, { .i64 = 5 }, 1, 10, FLAGS },
+    { "accuracy", "(>=shakiness) 1: low 15: high (slow)", OFFSETC(accuracy), AV_OPT_TYPE_INT, { .i64 = 15 }, 1, 15, FLAGS },
+    { "stepsize", "region around minimum is scanned with 1 pixel resolution", OFFSETC(stepSize), AV_OPT_TYPE_INT, { .i64 = 6 }, 1, 32, FLAGS },
+    { "mincontrast", "below this contrast a field is discarded (0-1)", OFFSETC(contrastThreshold), AV_OPT_TYPE_DOUBLE, { .dbl = 0.25 }, 0.0, 1.0, FLAGS },
+    { "show", "0: draw nothing; 1,2: show fields and transforms", OFFSETC(show), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 2, FLAGS },
+    { "tripod", "virtual tripod mode (if >0): motion is compared to a "
+                "reference frame (frame # is the value)", OFFSETC(virtualTripod), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS },
+    { NULL }
+};
+
+static const AVClass class = {
+ .class_name = "vidstabdetect",
+ .item_name = av_default_item_name,
+ .option = vidstabdetect_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+static const AVFilterPad inputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .filter_frame = filter_frame,
+ .config_props = config_input,
+ },
+ { NULL }
+};
+
+static const AVFilterPad outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ },
+ { NULL }
+};
+
+AVFilter ff_vf_vidstabdetect = {
+ .name = "vidstabdetect",
+ .description = NULL_IF_CONFIG_SMALL("Extract relative transformations, "
+ "pass 1 of 2 for stabilization "
+                                          "(see vidstabtransform for pass 2)."),
+ .priv_size = sizeof(StabData),
+ .init = init,
+ .uninit = uninit,
+ .query_formats = query_formats,
+ .inputs = inputs,
+ .outputs = outputs,
+ .priv_class = &class,
+};
diff --git a/libavfilter/vf_vidstabtransform.c b/libavfilter/vf_vidstabtransform.c
new file mode 100644
index 0000000..4e7ea86
--- /dev/null
+++ b/libavfilter/vf_vidstabtransform.c
@@ -0,0 +1,306 @@
+/*
+ * Copyright (c) 2013 Georg Martius <georg dot martius at web dot de>
+ *
+ * This file is part of Libav.
+ *
+ * Libav is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * Libav is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with Libav; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <vid.stab/libvidstab.h>
+
+#include "libavutil/common.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+#include "video.h"
+
+#include "vidstabutils.h"
+
+#define DEFAULT_INPUT_NAME "transforms.trf"
+
+typedef struct {
+ const AVClass *class;
+
+ VSTransformData td;
+ VSTransformConfig conf;
+
+ VSTransformations trans; // transformations
+ char *input; // name of transform file
+ int tripod;
+ int debug;
+} TransformContext;
+
+static av_cold int init(AVFilterContext *ctx)
+{
+ ff_vs_init();
+ av_log(ctx, AV_LOG_VERBOSE, "vidstabtransform filter: init %s\n",
+ LIBVIDSTAB_VERSION);
+ return 0;
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+ TransformContext *tc = ctx->priv;
+
+ vsTransformDataCleanup(&tc->td);
+ vsTransformationsCleanup(&tc->trans);
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ // If you add something here also add it in vidstabutils.c
+ static const enum AVPixelFormat pix_fmts[] = {
+ AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV420P,
+ AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUVA420P,
+ AV_PIX_FMT_YUV440P, AV_PIX_FMT_GRAY8,
+ AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24, AV_PIX_FMT_RGBA,
+ AV_PIX_FMT_NONE
+ };
+
+ ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
+ return 0;
+}
+
+
+static int config_input(AVFilterLink *inlink)
+{
+ AVFilterContext *ctx = inlink->dst;
+ TransformContext *tc = ctx->priv;
+ FILE *f;
+
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
+
+ VSTransformData *td = &(tc->td);
+
+ VSFrameInfo fi_src;
+ VSFrameInfo fi_dest;
+
+ if (!vsFrameInfoInit(&fi_src, inlink->w, inlink->h,
+ ff_av2vs_pixfmt(ctx, inlink->format)) ||
+ !vsFrameInfoInit(&fi_dest, inlink->w, inlink->h,
+ ff_av2vs_pixfmt(ctx, inlink->format))) {
+ av_log(ctx, AV_LOG_ERROR, "unknown pixel format: %i (%s)",
+ inlink->format, desc->name);
+ return AVERROR(EINVAL);
+ }
+
+ if (fi_src.bytesPerPixel != av_get_bits_per_pixel(desc) / 8 ||
+ fi_src.log2ChromaW != desc->log2_chroma_w ||
+ fi_src.log2ChromaH != desc->log2_chroma_h) {
+ av_log(ctx, AV_LOG_ERROR, "pixel-format error: bpp %i<>%i ",
+ fi_src.bytesPerPixel, av_get_bits_per_pixel(desc) / 8);
+ av_log(ctx, AV_LOG_ERROR, "chroma_subsampl: w: %i<>%i h: %i<>%i\n",
+ fi_src.log2ChromaW, desc->log2_chroma_w,
+ fi_src.log2ChromaH, desc->log2_chroma_h);
+ return AVERROR(EINVAL);
+ }
+
+    // set values that are not initialized by the options
+ tc->conf.modName = "vidstabtransform";
+ tc->conf.verbose = 1 + tc->debug;
+ if (tc->tripod) {
+        av_log(ctx, AV_LOG_INFO, "Virtual tripod mode: relative=0, smoothing=0\n");
+ tc->conf.relative = 0;
+ tc->conf.smoothing = 0;
+ }
+ tc->conf.simpleMotionCalculation = 0;
+ tc->conf.storeTransforms = tc->debug;
+ tc->conf.smoothZoom = 0;
+
+ if (vsTransformDataInit(td, &tc->conf, &fi_src, &fi_dest) != VS_OK) {
+ av_log(ctx, AV_LOG_ERROR,
+               "initialization of vid.stab transform failed, please report a BUG\n");
+ return AVERROR(EINVAL);
+ }
+
+ vsTransformGetConfig(&tc->conf, td);
+    av_log(ctx, AV_LOG_INFO, "Video transformation/stabilization settings (pass 2/2):\n");
+ av_log(ctx, AV_LOG_INFO, " input = %s\n", tc->input);
+ av_log(ctx, AV_LOG_INFO, " smoothing = %d\n", tc->conf.smoothing);
+ av_log(ctx, AV_LOG_INFO, " optalgo = %s\n",
+ tc->conf.camPathAlgo == VSOptimalL1 ? "opt" :
+ (tc->conf.camPathAlgo == VSGaussian ? "gauss" : "avg" ));
+ av_log(ctx, AV_LOG_INFO, " maxshift = %d\n", tc->conf.maxShift);
+ av_log(ctx, AV_LOG_INFO, " maxangle = %f\n", tc->conf.maxAngle);
+    av_log(ctx, AV_LOG_INFO, " crop = %s\n", tc->conf.crop ? "Black" : "Keep");
+    av_log(ctx, AV_LOG_INFO, " relative = %s\n", tc->conf.relative ? "True" : "False");
+    av_log(ctx, AV_LOG_INFO, " invert = %s\n", tc->conf.invert ? "True" : "False");
+ av_log(ctx, AV_LOG_INFO, " zoom = %f\n", tc->conf.zoom);
+ av_log(ctx, AV_LOG_INFO, " optzoom = %s\n",
+ tc->conf.optZoom == 1 ? "Static (1)"
+ : (tc->conf.optZoom == 2 ? "Dynamic (2)"
+ : "Off (0)"));
+ if (tc->conf.optZoom == 2)
+ av_log(ctx, AV_LOG_INFO, " zoomspeed = %g\n", tc->conf.zoomSpeed);
+
+ av_log(ctx, AV_LOG_INFO, " interpol = %s\n",
+ getInterpolationTypeName(tc->conf.interpolType));
+
+ f = fopen(tc->input, "r");
+ if (f == NULL) {
+ av_log(ctx, AV_LOG_ERROR, "cannot open input file %s\n", tc->input);
+ return AVERROR(errno);
+    } else {
+        VSManyLocalMotions mlms;
+        if (vsReadLocalMotionsFile(f, &mlms) == VS_OK) {
+            // calculate the actual transforms from the local motions
+            if (vsLocalmotions2Transforms(td, &mlms, &tc->trans) != VS_OK) {
+                av_log(ctx, AV_LOG_ERROR, "calculating transformations failed\n");
+                fclose(f);
+                return AVERROR(EINVAL);
+            }
+        } else { // try to read old format
+            if (!vsReadOldTransforms(td, f, &tc->trans)) { /* read input file */
+                av_log(ctx, AV_LOG_ERROR, "error parsing input file %s\n", tc->input);
+                fclose(f);
+                return AVERROR(EINVAL);
+            }
+        }
+    }
+    fclose(f);
+
+ if (vsPreprocessTransforms(td, &tc->trans) != VS_OK ) {
+ av_log(ctx, AV_LOG_ERROR, "error while preprocessing transforms\n");
+ return AVERROR(EINVAL);
+ }
+
+    // TODO: add sharpening; so far the user needs to call the unsharp filter manually
+ return 0;
+}
+
+
+static int filter_frame(AVFilterLink *inlink, AVFrame *in)
+{
+ AVFilterContext *ctx = inlink->dst;
+ TransformContext *tc = ctx->priv;
+ VSTransformData* td = &(tc->td);
+
+ AVFilterLink *outlink = inlink->dst->outputs[0];
+ int direct = 0;
+ AVFrame *out;
+ VSFrame inframe;
+ int plane;
+
+ if (av_frame_is_writable(in)) {
+ direct = 1;
+ out = in;
+ } else {
+ out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+ if (!out) {
+ av_frame_free(&in);
+ return AVERROR(ENOMEM);
+ }
+ av_frame_copy_props(out, in);
+ }
+
+ for (plane = 0; plane < vsTransformGetSrcFrameInfo(td)->planes; plane++) {
+ inframe.data[plane] = in->data[plane];
+ inframe.linesize[plane] = in->linesize[plane];
+ }
+ if (direct) {
+ vsTransformPrepare(td, &inframe, &inframe);
+ } else { // separate frames
+ VSFrame outframe;
+        for (plane = 0; plane < vsTransformGetDestFrameInfo(td)->planes; plane++) {
+ outframe.data[plane] = out->data[plane];
+ outframe.linesize[plane] = out->linesize[plane];
+ }
+ vsTransformPrepare(td, &inframe, &outframe);
+ }
+
+ vsDoTransform(td, vsGetNextTransform(td, &tc->trans));
+
+ vsTransformFinish(td);
+
+ if (!direct)
+ av_frame_free(&in);
+
+ return ff_filter_frame(outlink, out);
+}
+
+#define OFFSET(x) offsetof(TransformContext, x)
+#define OFFSETC(x) (offsetof(TransformContext, conf) + offsetof(VSTransformConfig, x))
+#define FLAGS AV_OPT_FLAG_VIDEO_PARAM
+
+static const AVOption vidstabtransform_options[] = {
+    { "input", "path to the file storing the transforms", OFFSET(input), AV_OPT_TYPE_STRING, { .str = DEFAULT_INPUT_NAME }, .flags = FLAGS },
+    { "smoothing", "number of frames*2 + 1 used for lowpass filtering", OFFSETC(smoothing), AV_OPT_TYPE_INT, { .i64 = 15 }, 0, 1000, FLAGS },
+    { "optalgo", "camera path optimization algo", OFFSETC(camPathAlgo), AV_OPT_TYPE_INT, { .i64 = VSOptimalL1 }, VSOptimalL1, VSAvg, FLAGS, "optalgo" },
+// from version 1.0 onward
+//  { "opt", "global optimization", 0, AV_OPT_TYPE_CONST, { .i64 = VSOptimalL1 }, 0, 0, FLAGS, "optalgo" },
+    { "gauss", "gaussian kernel", 0, AV_OPT_TYPE_CONST, { .i64 = VSGaussian }, 0, 0, FLAGS, "optalgo" },
+    { "avg", "simple averaging on motion", 0, AV_OPT_TYPE_CONST, { .i64 = VSAvg }, 0, 0, FLAGS, "optalgo" },
+    { "maxshift", "maximal number of pixels to translate image", OFFSETC(maxShift), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 500, FLAGS },
+    { "maxangle", "maximal angle in rad to rotate image", OFFSETC(maxAngle), AV_OPT_TYPE_DOUBLE, { .dbl = -1.0 }, -1.0, 3.14, FLAGS },
+    { "crop", "set cropping mode", OFFSETC(crop), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS, "crop" },
+    { "keep", "keep border", 0, AV_OPT_TYPE_CONST, { .i64 = VSKeepBorder }, 0, 0, FLAGS, "crop" },
+    { "black", "black border", 0, AV_OPT_TYPE_CONST, { .i64 = VSCropBorder }, 0, 0, FLAGS, "crop" },
+    { "invert", "1: invert transforms", OFFSETC(invert), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS },
+    { "relative", "consider transforms as 0: absolute, 1: relative", OFFSETC(relative), AV_OPT_TYPE_INT, { .i64 = 1 }, 0, 1, FLAGS },
+    { "zoom", "percentage to zoom >0: zoom in, <0 zoom out", OFFSETC(zoom), AV_OPT_TYPE_DOUBLE, { .dbl = 0 }, -100, 100, FLAGS },
+    { "optzoom", "0: nothing, 1: optimal static zoom, 2: optimal dynamic zoom", OFFSETC(optZoom), AV_OPT_TYPE_INT, { .i64 = 1 }, 0, 2, FLAGS },
+    { "zoomspeed", "for adaptive zoom: percent to zoom maximally each frame", OFFSETC(zoomSpeed), AV_OPT_TYPE_DOUBLE, { .dbl = 0.25 }, 0, 5, FLAGS },
+    { "interpol", "type of interpolation", OFFSETC(interpolType), AV_OPT_TYPE_INT, { .i64 = 2 }, 0, 3, FLAGS, "interpol" },
+    { "no", "no interpolation", 0, AV_OPT_TYPE_CONST, { .i64 = VS_Zero }, 0, 0, FLAGS, "interpol" },
+    { "linear", "linear (horizontal)", 0, AV_OPT_TYPE_CONST, { .i64 = VS_Linear }, 0, 0, FLAGS, "interpol" },
+    { "bilinear", "bi-linear", 0, AV_OPT_TYPE_CONST, { .i64 = VS_BiLinear }, 0, 0, FLAGS, "interpol" },
+    { "bicubic", "bi-cubic", 0, AV_OPT_TYPE_CONST, { .i64 = VS_BiCubic }, 0, 0, FLAGS, "interpol" },
+    { "tripod", "if 1: virtual tripod mode (equiv. to relative=0:smoothing=0)", OFFSET(tripod), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS },
+    { "debug", "if 1: more output printed and global motions are stored to file", OFFSET(debug), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS },
+    { NULL }
+};
+
+static const AVClass class = {
+ .class_name = "vidstabtransform",
+ .item_name = av_default_item_name,
+ .option = vidstabtransform_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+static const AVFilterPad inputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .filter_frame = filter_frame,
+ .config_props = config_input,
+ },
+ { NULL }
+};
+
+static const AVFilterPad outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ },
+ { NULL }
+};
+
+AVFilter ff_vf_vidstabtransform = {
+ .name = "vidstabtransform",
+ .description = NULL_IF_CONFIG_SMALL("Transform the frames, "
+ "pass 2 of 2 for stabilization "
+ "(see vidstabdetect for pass 1)."),
+ .priv_size = sizeof(TransformContext),
+ .init = init,
+ .uninit = uninit,
+ .query_formats = query_formats,
+ .inputs = inputs,
+ .outputs = outputs,
+ .priv_class = &class,
+};
diff --git a/libavfilter/vidstabutils.c b/libavfilter/vidstabutils.c
new file mode 100644
index 0000000..30a476a
--- /dev/null
+++ b/libavfilter/vidstabutils.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright (c) 2013 Georg Martius <georg dot martius at web dot de>
+ *
+ * This file is part of Libav.
+ *
+ * Libav is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * Libav is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with Libav; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "vidstabutils.h"
+
+/** convert AV's pixelformat to vid.stab pixelformat */
+VSPixelFormat ff_av2vs_pixfmt(AVFilterContext *ctx, enum AVPixelFormat pf)
+{
+ switch (pf) {
+ case AV_PIX_FMT_YUV420P: return PF_YUV420P;
+ case AV_PIX_FMT_YUV422P: return PF_YUV422P;
+ case AV_PIX_FMT_YUV444P: return PF_YUV444P;
+ case AV_PIX_FMT_YUV410P: return PF_YUV410P;
+ case AV_PIX_FMT_YUV411P: return PF_YUV411P;
+ case AV_PIX_FMT_YUV440P: return PF_YUV440P;
+ case AV_PIX_FMT_YUVA420P: return PF_YUVA420P;
+ case AV_PIX_FMT_GRAY8: return PF_GRAY8;
+ case AV_PIX_FMT_RGB24: return PF_RGB24;
+ case AV_PIX_FMT_BGR24: return PF_BGR24;
+ case AV_PIX_FMT_RGBA: return PF_RGBA;
+ default:
+ av_log(ctx, AV_LOG_ERROR, "cannot deal with pixel format %i\n", pf);
+ return PF_NONE;
+ }
+}
+
+/** struct to hold a valid context for logging from within vid.stab lib */
+typedef struct {
+ const AVClass *class;
+} VS2AVLogCtx;
+
+/** wrapper to log vs_log into av_log */
+static int vs2av_log(int type, const char *tag, const char *format, ...)
+{
+ va_list ap;
+ VS2AVLogCtx ctx;
+ AVClass class = {
+ .class_name = tag,
+ .item_name = av_default_item_name,
+ .option = 0,
+ .version = LIBAVUTIL_VERSION_INT,
+ };
+ ctx.class = &class;
+ va_start(ap, format);
+ av_vlog(&ctx, type, format, ap);
+ va_end(ap);
+ return VS_OK;
+}
+
+/** sets the memory allocation function and logging constants to av versions */
+void ff_vs_init(void)
+{
+ vs_malloc = av_malloc;
+ vs_zalloc = av_mallocz;
+ vs_realloc = av_realloc;
+ vs_free = av_free;
+
+ VS_ERROR_TYPE = AV_LOG_ERROR;
+ VS_WARN_TYPE = AV_LOG_WARNING;
+ VS_INFO_TYPE = AV_LOG_INFO;
+ VS_MSG_TYPE = AV_LOG_VERBOSE;
+
+ vs_log = vs2av_log;
+
+ VS_ERROR = 0;
+ VS_OK = 1;
+}
diff --git a/libavfilter/vidstabutils.h b/libavfilter/vidstabutils.h
new file mode 100644
index 0000000..f07fabf
--- /dev/null
+++ b/libavfilter/vidstabutils.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2013 Georg Martius <georg dot martius at web dot de>
+ *
+ * This file is part of Libav.
+ *
+ * Libav is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * Libav is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with Libav; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVFILTER_VIDSTABUTILS_H
+#define AVFILTER_VIDSTABUTILS_H
+
+#include <vid.stab/libvidstab.h>
+
+#include "avfilter.h"
+
+/* *** some conversions from avlib to vid.stab constants and functions *** */
+
+/** converts the avpixelformat into one from the vid.stab library */
+VSPixelFormat ff_av2vs_pixfmt(AVFilterContext *ctx, enum AVPixelFormat pf);
+
+/** sets the memory allocation function and logging constants to av versions */
+void ff_vs_init(void);
+
+#endif
--
1.8.5.2 (Apple Git-48)
--
Vittorio
More information about the pkg-multimedia-maintainers
mailing list