RuntimeError: Unsupported image type, must be 8bit gray or RGB image when trying to parse frames from IP camera

I'm trying to get a video stream from a camera and parse its frames. Here's my code:



import datetime
import os

import cv2
import face_recognition
from PIL import Image

import config
from helpers import get_known_faces, add_to_json, add_captured_face

print('Running...')
cap_vid = cv2.VideoCapture(config.stream_url)
while True:
    ret, frame = cap_vid.read()
    curr_face_encs = face_recognition.face_encodings(frame)
    curr_face_locs = face_recognition.face_locations(frame)
    face_names_in_DB, face_encs_in_DB = get_known_faces()
    if curr_face_encs:
        for idx, face in enumerate(curr_face_encs):
            res = face_recognition.compare_faces(face_encs_in_DB, face)
            if True in res:
                frame_data = {
                    'full_name': face_names_in_DB[res.index(True)],
                    'time': datetime.datetime.now().isoformat()
                }
                add_to_json(config.members_json_data, frame_data)
            else:
                top, right, bottom, left = curr_face_locs[idx]
                img = Image.fromarray(frame[top:bottom, left:right])
                img_name = '%s-%d.jpg' % (
                    datetime.datetime.now().isoformat().replace(':', 'c'), idx)
                img_path = os.path.join(config.stranger_dir, img_name)
                img.resize((40, 40)).save(img_path)
                add_captured_face(img_path,
                                  datetime.datetime.now().isoformat())


When config.stream_url is set to 0, i.e. when I'm reading from the webcam, everything works perfectly. But when config.stream_url = rtsp://10.10.111.200/profile2/media.smp (an IP camera that I have full administrative access to, and whose live video I can watch in, say, VLC player, so the URL itself works and I do have access to it), I get the following error:



Traceback (most recent call last):
  File "C:/Users/turalm01/Desktop/work/ITSFaceDetector/main.py", line 25, in <module>
    curr_face_encs = face_recognition.face_encodings(frame)
  File "C:\Users\turalm01\AppData\Local\Programs\Python\Python36\lib\site-packages\face_recognition\api.py", line 209, in face_encodings
    raw_landmarks = _raw_face_landmarks(face_image, known_face_locations, model="small")
  File "C:\Users\turalm01\AppData\Local\Programs\Python\Python36\lib\site-packages\face_recognition\api.py", line 153, in _raw_face_landmarks
    face_locations = _raw_face_locations(face_image)
  File "C:\Users\turalm01\AppData\Local\Programs\Python\Python36\lib\site-packages\face_recognition\api.py", line 102, in _raw_face_locations
    return face_detector(img, number_of_times_to_upsample)
RuntimeError: Unsupported image type, must be 8bit gray or RGB image.
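
Since the exact same loop works with the webcam, I suspect the frames coming over RTSP are either empty or in a format dlib won't accept. Here is a minimal standalone check I can run (just a sketch, not part of the project; the URL is the same value stored in config.stream_url) to see what cv2.VideoCapture actually returns for this stream:

import cv2

# Standalone sanity check: open the RTSP stream and inspect the first frame.
url = 'rtsp://10.10.111.200/profile2/media.smp'  # same value as config.stream_url
cap = cv2.VideoCapture(url)
print('Opened:', cap.isOpened())   # False -> OpenCV could not open the stream at all

ret, frame = cap.read()
print('Read OK:', ret)             # False -> no frame was decoded
if ret:
    print('dtype:', frame.dtype)   # dlib expects uint8
    print('shape:', frame.shape)   # (height, width, 3) for a BGR color frame
cap.release()

If isOpened() or ret comes back False here, that would point at the capture itself (e.g. OpenCV's FFmpeg build not handling this RTSP profile) rather than at face_recognition.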


I thought it perhaps wanted me to convert each frame to RGB, so I tried the following (though I'm not sure this is the right way to do the conversion):



import datetime
import os

import cv2
import face_recognition
from PIL import Image

import config
from helpers import get_known_faces, add_to_json, add_captured_face

print('Running...')
cap_vid = cv2.VideoCapture(config.stream_url)
while True:
    ret, frame = cap_vid.read()
    rgb_frame = frame[:, :, ::-1]
    curr_face_encs = face_recognition.face_encodings(rgb_frame)
    curr_face_locs = face_recognition.face_locations(rgb_frame)
    face_names_in_DB, face_encs_in_DB = get_known_faces()
    if curr_face_encs:
        for idx, face in enumerate(curr_face_encs):
            res = face_recognition.compare_faces(face_encs_in_DB, face)
            if True in res:
                frame_data = {
                    'full_name': face_names_in_DB[res.index(True)],
                    'time': datetime.datetime.now().isoformat()
                }
                add_to_json(config.members_json_data, frame_data)
            else:
                top, right, bottom, left = curr_face_locs[idx]
                img = Image.fromarray(rgb_frame[top:bottom, left:right])
                img_name = '%s-%d.jpg' % (
                    datetime.datetime.now().isoformat().replace(':', 'c'), idx)
                img_path = os.path.join(config.stranger_dir, img_name)
                img.resize((40, 40)).save(img_path)
                add_captured_face(img_path,
                                  datetime.datetime.now().isoformat())


This time it gave me the following error:



Traceback (most recent call last):
File "C:/Users/turalm01/Desktop/work/ITSFaceDetector/main.py", line 25, in <module>
rgb_frame = frame[:, :, ::-1]
TypeError: 'NoneType' object is not subscriptable


Note: one more detail that might be relevant. In another Python module I do receive the live video stream from this same IP camera successfully, but with a small bug: the video I receive is only the top-right quarter of the whole frame.
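
The second traceback shows that cap_vid.read() is returning frame = None, i.e. the capture isn't delivering a frame at that point. For reference, this is the guarded version of the loop I'm considering (only a sketch built on the same config and helpers as above): it skips failed reads instead of crashing and uses cv2.cvtColor for the BGR-to-RGB conversion, which returns a fresh contiguous array rather than the reversed-stride view that slicing creates.

import cv2
import face_recognition

import config  # same config module as in the code above

cap_vid = cv2.VideoCapture(config.stream_url)
while True:
    ret, frame = cap_vid.read()
    if not ret or frame is None:
        # read() failed (stream not opened yet or a dropped frame) - skip this iteration
        continue

    # OpenCV delivers BGR uint8; face_recognition/dlib wants 8-bit RGB
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    curr_face_locs = face_recognition.face_locations(rgb_frame)
    curr_face_encs = face_recognition.face_encodings(rgb_frame, curr_face_locs)
    # ... the rest of the matching/saving logic stays the same as in the code above ...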
Tags: python-3.x face-recognition
asked Nov 16 '18 at 7:22 by NB1991
edited Nov 16 '18 at 22:20 by halfer

  • Please read Under what circumstances may I add “urgent” or other similar phrases to my question, in order to obtain faster answers? - the summary is that this is not an ideal way to address volunteers, and is probably counterproductive to obtaining answers. Please refrain from adding this to your questions.

    – halfer
    Nov 16 '18 at 22:20