#include <sys/types.h>

static uint64_t frame_cnt = 0;

/* error paths of the send/receive decoding API */
fprintf(stderr, "Error sending a packet for decoding: %s\n", av_err2str(ret));
fprintf(stderr, "Error during decoding: %s\n", av_err2str(ret));

/* header of the per-frame checksum report (some header lines are elided in this listing) */
printf("#format: frame checksums\n"
       "#media_type 0: video\n"
       "#codec_id 0: rawvideo\n"
       "#dimensions 0: 352x288\n"
       "#stream#, dts, pts, duration, size, hash\n");

/* hash the luma plane row by row, then the two subsampled chroma planes
   (the loop bodies are elided in this listing) */
for (int i = 0; i < frame->height; i++)
    /* ... */;
for (int i = 0; i < frame->height >> desc->log2_chroma_h; i++)
    /* ... */;
for (int i = 0; i < frame->height >> desc->log2_chroma_h; i++)
    /* ... */;

/* one report line per decoded frame; the size and checksum arguments are elided here */
printf("0, %10"PRId64", %10"PRId64", 1, %8d, %s\n",
       frame_cnt, frame_cnt, /* ... */);

int main(int argc, char **argv)
{
    unsigned int threads;
    int nals = 0, ret = 0;

    fprintf(stderr, "Usage: %s <threads> <input file>\n", argv[0]);

    if (!(threads = strtoul(argv[1], NULL, 0)))
        /* ... */;

    /* error messages of the decoder setup steps (the checks themselves are elided) */
    fprintf(stderr, "Codec not found\n");
    fprintf(stderr, "Could not allocate video codec context\n");

    c->thread_count = threads; /* number of threads requested from the decoder */

    fprintf(stderr, "Could not open codec\n");
    fprintf(stderr, "Couldn't activate slice threading: %d\n", c->active_thread_type);
    fprintf(stderr, "WARN: not using threads, only checking decoding slice NALUs\n");
    fprintf(stderr, "Could not allocate video frame\n");

    if (!(file = fopen(argv[2], "rb"))) {
        fprintf(stderr, "Couldn't open NALU file: %s\n", argv[2]);

    /* each NALU in the input file is preceded by a 16-bit size field */
    size_t ret = fread(&size, 1, sizeof(uint16_t), file);
    if (ret != sizeof(uint16_t))
        /* ... */;
    perror("Couldn't read data");

    if (++nals >= threads) {
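The fragments above sketch the test's overall flow: read 16-bit size-prefixed NAL units from the input file, accumulate them, and push them through the avcodec_send_packet()/avcodec_receive_frame() API. A minimal, self-contained version of that flow is sketched below; the big-endian byte order of the size prefix is an assumption here, and the slice batching, threading setup and checksum reporting of the real test are omitted.

/* Sketch only: decode a file of 16-bit size-prefixed H.264 NAL units.
 * The big-endian prefix is an assumption; batching, slice threading and
 * the checksum report of the real test are left out. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <libavcodec/avcodec.h>

int main(int argc, char **argv)
{
    static uint8_t buf[UINT16_MAX + AV_INPUT_BUFFER_PADDING_SIZE];
    const AVCodec *codec;
    AVCodecContext *c;
    AVFrame *frame;
    AVPacket *pkt;
    FILE *f;
    uint8_t hdr[2];

    if (argc < 2) {
        fprintf(stderr, "Usage: %s <nalu file>\n", argv[0]);
        return 1;
    }

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    c     = avcodec_alloc_context3(codec);
    frame = av_frame_alloc();
    pkt   = av_packet_alloc();
    f     = fopen(argv[1], "rb");
    if (!codec || !c || !frame || !pkt || !f || avcodec_open2(c, codec, NULL) < 0)
        return 1;

    while (fread(hdr, 1, 2, f) == 2) {
        uint16_t size = (uint16_t)((hdr[0] << 8) | hdr[1]); /* assumed big-endian prefix */
        if (fread(buf, 1, size, f) != size)
            break;
        memset(buf + size, 0, AV_INPUT_BUFFER_PADDING_SIZE); /* required input padding */
        pkt->data = buf;
        pkt->size = size;
        if (avcodec_send_packet(c, pkt) < 0)
            break;
        while (avcodec_receive_frame(c, frame) >= 0)
            printf("decoded frame %dx%d\n", frame->width, frame->height);
    }
    /* a final avcodec_send_packet(c, NULL) drain pass is omitted for brevity */

    fclose(f);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&c);
    return 0;
}

Since the packet here is not reference-counted, the decoder copies the data, so the same buffer can be reused for the next NALU.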
static AVCodecContext * dec_ctx
#define AV_HASH_MAX_SIZE
Maximum value that av_hash_get_size() will currently return.
const AVPixFmtDescriptor * av_pix_fmt_desc_get(enum AVPixelFormat pix_fmt)
#define AVERROR_EOF
End of file.
void av_frame_free(AVFrame **frame)
Free the frame and any dynamically allocated objects in it, e.g.
This structure describes decoded (raw) audio or video data.
void av_packet_free(AVPacket **pkt)
Free the packet; if the packet is reference counted, it will be unreferenced first.
AVFrame * av_frame_alloc(void)
Allocate an AVFrame and set its fields to default values.
static int decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
AVCodecContext * avcodec_alloc_context3(const AVCodec *codec)
Allocate an AVCodecContext and set its fields to default values.
int av_hash_alloc(AVHashContext **ctx, const char *name)
Allocate a hash context for the algorithm specified by name.
int attribute_align_arg avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame)
Return decoded output data from a decoder or encoder (when the AV_CODEC_FLAG_RECON_FRAME flag is used...
void av_hash_init(AVHashContext *ctx)
Initialize or reset a hash context.
void avcodec_free_context(AVCodecContext **avctx)
Free the codec context and everything associated with it and write NULL to the provided pointer.
int attribute_align_arg avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options)
Initialize the AVCodecContext to use the given AVCodec.
void av_hash_update(AVHashContext *ctx, const uint8_t *src, size_t len)
Update a hash context with additional data.
void av_hash_freep(AVHashContext **ctx)
Free hash context and set hash context pointer to NULL.
const AVCodec * avcodec_find_decoder(enum AVCodecID id)
Find a registered decoder with a matching codec ID.
#define av_err2str(errnum)
Convenience macro, the return value should be used only directly in function arguments but never stan...
#define FF_THREAD_SLICE
Decode more than one part of a single frame at once.
printf("static const uint8_t my_array[100] = {\n")
AVPacket * av_packet_alloc(void)
Allocate an AVPacket and set its fields to default values.
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)
Supply raw packet data as input to a decoder.
#define i(width, name, range_min, range_max)
enum AVPixelFormat pix_fmt
Pixel format, see AV_PIX_FMT_xxx.
int main(int argc, char **argv)
#define AV_INPUT_BUFFER_PADDING_SIZE
#define AV_CODEC_FLAG2_CHUNKS
Input bitstream might be truncated at packet boundaries instead of only at frame boundaries.
Main external API structure.
int av_hash_get_size(const AVHashContext *ctx)
void av_hash_final_hex(struct AVHashContext *ctx, uint8_t *dst, int size)
Finalize a hash context and store the hexadecimal representation of the actual hash value as a string...
Descriptor that unambiguously describes how the bits of a pixel are stored in the up to 4 data planes...
This structure stores compressed data.
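The hash helpers listed here (av_hash_alloc, av_hash_init, av_hash_update, av_hash_final_hex, av_hash_freep) are the building blocks behind the per-frame checksums in the listing. The sketch below shows that pattern on a decoded frame; the function name checksum_frame, the choice of "md5", and the assumption of a three-plane planar YUV frame without alpha are illustrative rather than taken from the file.

#include <errno.h>
#include <stdint.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>
#include <libavutil/hash.h>
#include <libavutil/pixdesc.h>

/* Sketch only: hex checksum of one planar YUV frame (three planes, no alpha assumed). */
static int checksum_frame(const AVFrame *frame, char *out, int out_size)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get((enum AVPixelFormat)frame->format);
    struct AVHashContext *hash = NULL;
    int ret;

    if (!desc)
        return AVERROR(EINVAL);
    if ((ret = av_hash_alloc(&hash, "md5")) < 0) /* any algorithm name known to libavutil works */
        return ret;
    av_hash_init(hash);

    /* luma plane at full resolution */
    for (int i = 0; i < frame->height; i++)
        av_hash_update(hash, frame->data[0] + i * frame->linesize[0], frame->width);

    /* chroma planes, reduced by the format's subsampling factors */
    for (int p = 1; p <= 2; p++)
        for (int i = 0; i < frame->height >> desc->log2_chroma_h; i++)
            av_hash_update(hash, frame->data[p] + i * frame->linesize[p],
                           frame->width >> desc->log2_chroma_w);

    av_hash_final_hex(hash, (uint8_t *)out, out_size);
    av_hash_freep(&hash);
    return 0;
}

The destination buffer would typically be sized as 2 * av_hash_get_size(hash) + 1 bytes, or simply 2 * AV_HASH_MAX_SIZE + 1, so the hexadecimal digest and its terminating NUL always fit.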