What is the correct way to stream custom packets using ffmpeg?
I want to encode frames from a camera using NvPipe and stream them via RTP using FFmpeg. When I try to decode the stream, I get the following errors:
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] no frame!
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced 0B f=0/0
Last message repeated 1 times
On another PC it is not even able to stream and fails with a segmentation fault in av_interleaved_write_frame(..). How do I initialize the AVPacket and its time base correctly so that the stream can be sent and received with ffplay/VLC/other software?
My code:
avformat_network_init();

// init encoder
AVPacket *pkt = new AVPacket();
int targetBitrate = 1000000;
int targetFPS = 30;
const uint32_t width = 640;
const uint32_t height = 480;
NvPipe* encoder = NvPipe_CreateEncoder(NVPIPE_BGRA32, NVPIPE_H264, NVPIPE_LOSSY, targetBitrate, targetFPS);

// init stream output
std::string str = "rtp://127.0.0.1:49990";
AVStream* stream = nullptr;
AVOutputFormat *output_format = av_guess_format("rtp", nullptr, nullptr);
AVFormatContext *output_format_ctx = avformat_alloc_context();
avformat_alloc_output_context2(&output_format_ctx, output_format, output_format->name, str.c_str());

// open output url
if (!(output_format->flags & AVFMT_NOFILE)) {
    int ret = avio_open(&output_format_ctx->pb, str.c_str(), AVIO_FLAG_WRITE);
}

output_format_ctx->oformat = output_format;
output_format->video_codec = AV_CODEC_ID_H264;

stream = avformat_new_stream(output_format_ctx, nullptr);
stream->id = 0;
stream->codecpar->codec_id = AV_CODEC_ID_H264;
stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
stream->codecpar->width = width;
stream->codecpar->height = height;
stream->time_base.den = 1;
stream->time_base.num = targetFPS; // 30 fps

/* Write the header */
avformat_write_header(output_format_ctx, nullptr); // this seems to destroy the timebase of the stream

std::vector<uint8_t> rgba(width * height * 4);
std::vector<uint8_t> compressed(rgba.size());
int frameCnt = 0;

// encoding and streaming
while (true)
{
    frameCnt++;

    // Encoding
    // Construct dummy frame
    for (uint32_t y = 0; y < height; ++y)
        for (uint32_t x = 0; x < width; ++x)
            rgba[4 * (y * width + x) + 1] = (255.0f * x * y) / (width * height) * (y % 100 < 50);

    uint64_t size = NvPipe_Encode(encoder, rgba.data(), width * 4, compressed.data(),
                                  compressed.size(), width, height, false); // last parameter needs to be true for keyframes

    av_init_packet(pkt);
    pkt->data = compressed.data();
    pkt->size = size;
    pkt->pts = frameCnt;

    if (!memcmp(compressed.data(), "\x00\x00\x00\x01\x67", 5)) {
        pkt->flags |= AV_PKT_FLAG_KEY;
    }

    // stream
    fflush(stdout);

    // Write the compressed frame into the output
    pkt->pts = av_rescale_q(frameCnt, AVRational{1, targetFPS}, stream->time_base);
    pkt->dts = pkt->pts;
    pkt->stream_index = stream->index;

    /* Write the data on the packet to the output format */
    av_interleaved_write_frame(output_format_ctx, pkt);

    /* Reset the packet */
    av_packet_unref(pkt);
}
The .sdp file to open the stream with ffplay looks like this:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 58.18.101
m=video 49990 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
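For reference, assuming the SDP is saved as stream.sdp (the filename is just an example), I open it with a command along these lines; newer FFmpeg builds require the protocols used by the SDP to be whitelisted explicitly:

ffplay -protocol_whitelist file,udp,rtp stream.sdp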
Tags: c++, ffmpeg
asked Nov 19 '18 at 12:51 by Lucker10, edited Nov 19 '18 at 13:30
1 Answer
The code above never sends keyframes (I-frames). The (obvious) solution is to send keyframes by setting the last parameter of NvPipe_Encode() to true. To achieve a certain GOP size gop_size, do something like:

NvPipe_Encode(encoder, rgba.data(), width * 4, compressed.data(),
              compressed.size(), width, height, frameCnt % gop_size == 0);
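In the context of the encoding loop from the question, a minimal sketch of this change looks as follows. gop_size is a value of your choosing (30 here, i.e. one keyframe per second at 30 fps), and setting AV_PKT_FLAG_KEY from the same flag is a simplification instead of inspecting the NAL units:

const int gop_size = 30; // assumed GOP length: one IDR frame per second at 30 fps

while (true)
{
    frameCnt++;

    // ... fill rgba as in the question ...

    // Force an IDR frame (carrying SPS/PPS) at the start of every GOP so that a
    // decoder joining the stream later can start decoding.
    bool forceKeyframe = (frameCnt % gop_size == 0);
    uint64_t size = NvPipe_Encode(encoder, rgba.data(), width * 4, compressed.data(),
                                  compressed.size(), width, height, forceKeyframe);

    av_init_packet(pkt);
    pkt->data = compressed.data();
    pkt->size = size;
    pkt->stream_index = stream->index;
    pkt->pts = av_rescale_q(frameCnt, AVRational{1, targetFPS}, stream->time_base);
    pkt->dts = pkt->pts;
    if (forceKeyframe)
        pkt->flags |= AV_PKT_FLAG_KEY;

    av_interleaved_write_frame(output_format_ctx, pkt);
    av_packet_unref(pkt);
}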
answered Jan 9 at 9:26 by Lucker10