Aligning the raw temperature data of an MLX90621 16x4 IR sensor array with the forehead of a face

I am working on a school project that uses an MLX90621 16x4 IR sensor array and a Raspberry Pi with Python to read the temperature array and turn it into a colored thermal overlay. The goal is for the overlay to align with a person's forehead, detected with a Haar cascade, and to display the forehead temperature reading.



I have managed to write the forehead-tracking code in Python, and to get the thermal data overlaid on the camera image by following this project:
https://hackaday.io/project/6416-raspberry-pi-thermal-imaging



However, I can only get each part to work separately; I am struggling to combine the two scripts. I have done a lot of research on the subject, but I only seem to find Arduino/C++ tutorials or theory-related posts.



I am new to programming and to the Raspberry Pi.



Here is the forehead-tracking code using Haar cascades and the Pi Camera:



from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warmup
time.sleep(0.1)

# Trained XML classifier for eye detection
eye_cascade = cv2.CascadeClassifier('/home/pi/Desktop/mlxd-master_terickson/testScripts/haarcascade_eye.xml')
# Trained XML classifier for face detection
face_cascade = cv2.CascadeClassifier('/home/pi/Desktop/mlxd-master_terickson/testScripts/haarcascade_frontalface_default.xml')



# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    font = cv2.FONT_HERSHEY_SIMPLEX

    eyes = eye_cascade.detectMultiScale(gray, 1.3, 6)  # detect eyes in the frame
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(image, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)  # box around each eye

    faces = face_cascade.detectMultiScale(gray, 1.3, 6)
    for (x, y, w, h) in faces:
        forehead_center = int(x + w), int(y + h / 4)  # bottom-right corner of the forehead box (full face width, top quarter of the face)
        cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)     # box around the face
        cv2.rectangle(image, (x, y), forehead_center, (255, 255, 0), 2)  # box around the forehead
        cv2.putText(image, 'x: {0} | y: {1}'.format(forehead_center[0], forehead_center[1]),
                    (10, 50), font, 1.2, (0, 0, 255), 1)                 # print the forehead coordinates

    # show the frame
    cv2.imshow("Eye, Face and Forehead tracking", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord('q'):
        break
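
As a side note, if the two scripts are going to be merged later, the detection logic above could be pulled into a small reusable helper. This is only a sketch based on the loop above (same cascade parameters, same full-width, top-quarter forehead box); the function name detect_forehead is mine, not part of the original code:

def detect_forehead(gray, face_cascade):
    # Return the forehead box (x, y, w, h) of the first detected face, or None.
    # Same proportions as the loop above: full face width, top quarter of the face height.
    faces = face_cascade.detectMultiScale(gray, 1.3, 6)
    for (x, y, w, h) in faces:
        return (x, y, w, int(h / 4))
    return None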


Here is the code that turns the temperature array into an overlay and displays it on the Pi Camera video preview:



import picamera
import numpy as np
import subprocess
import skimage
from skimage import io, exposure, transform, img_as_float, img_as_ubyte
import matplotlib.pyplot as plt
from time import sleep
import cv2

# IR registration parameters
ROT = np.deg2rad(90)
SCALE = (36.2, 36.4)
OFFSET = (580, 170)

def getImage():
    fn = r'/home/pi/tmp.jpg'
    proc = subprocess.Popen('raspistill -o %s -w 640 -h 480 -n -t 3' % (fn),
                            shell=True, stderr=subprocess.STDOUT)
    proc.wait()
    im = io.imread(fn, as_grey=True)
    im = exposure.equalize_hist(im)
    return skimage.img_as_ubyte(im)


def get_overlay(fifo):
    # get the whole FIFO
    ir_raw = fifo.read()
    # trim to 128 bytes (64 pixels x 2 bytes)
    ir_trimmed = ir_raw[0:128]
    # go all numpy on it
    ir = np.frombuffer(ir_trimmed, np.uint16)
    # set the array shape to the sensor shape (16x4)
    ir = ir.reshape((16, 4))[::-1, ::-1]
    ir = img_as_float(ir)
    # stretch contrast on our heat map
    p2, p98 = np.percentile(ir, (2, 98))
    ir = exposure.rescale_intensity(ir, in_range=(p2, p98))
    # increase even further? (optional)
    ir = exposure.equalize_hist(ir)

    # turn our array into pretty colors
    cmap = plt.get_cmap('Spectral')
    rgba_img = cmap(ir)
    rgb_img = np.delete(rgba_img, 3, 2)

    # align the IR array with the camera
    tform = transform.AffineTransform(scale=SCALE, rotation=ROT, translation=OFFSET)
    ir_aligned = transform.warp(rgb_img, tform.inverse, mode='constant', output_shape=im.shape)
    # turn it back into a ubyte so it'll display on the preview overlay
    ir_byte = img_as_ubyte(ir_aligned)
    # return buffer
    return np.getbuffer(ir_byte)

#GaussianBlur = cv2.GaussianBlur(camera, (20,20),0)
#medianFiltered = cv2.medianBlur(camera,5)

im = getImage()


with picamera.PiCamera() as camera:
    camera.led = True
    camera.resolution = (640, 480)
    camera.framerate = 35
    camera.start_preview()

    # get the temperature array, and align with the image
    fifo = open('/var/run/mlx90621.sock', 'r')
    o = camera.add_overlay(get_overlay(fifo), layer=3, alpha=90)

    # update loop; clean up the overlay and the FIFO if anything goes wrong
    try:
        while True:
            sleep(0.001)
            o.update(get_overlay(fifo))
    except Exception:
        print('Error! Closing...')
        camera.remove_overlay(o)
        fifo.close()
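
One possible way to combine the two scripts, sketched here rather than tested on the actual hardware, is to drop the picamera preview overlay and do everything with OpenCV inside the capture_continuous loop: read a frame from the mlxd FIFO, colour it, blend it over the camera image, and average the IR pixels that fall under the forehead box. The FIFO path, the 128-byte frame size and the 16x4 reshape are taken from the script above; the simple stretch-to-full-frame blending and the row/column mapping are placeholders for the proper SCALE/ROT/OFFSET registration, and whether the values coming out of /var/run/mlx90621.sock are already calibrated temperatures depends on the mlxd daemon, so the reading shown here may still be raw counts:

import numpy as np
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

FIFO_PATH = '/var/run/mlx90621.sock'   # same FIFO the overlay script reads

def read_ir_frame(fifo):
    # One MLX90621 frame: 64 pixels x 2 bytes, reshaped to 16x4 as in get_overlay()
    raw = fifo.read()[0:128]
    ir = np.frombuffer(raw, np.uint16).reshape((16, 4))[::-1, ::-1]
    return ir.astype(np.float32)

def ir_to_heatmap(ir, size):
    # Normalize the IR frame to 0-255 and turn it into a BGR heat map of the given (width, height)
    norm = cv2.normalize(ir, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    color = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
    return cv2.resize(color, size, interpolation=cv2.INTER_CUBIC)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')  # adjust path to your setup

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

fifo = open(FIFO_PATH, 'rb')   # 'rb' so np.frombuffer gets bytes

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    ir = read_ir_frame(fifo)
    # Crude registration: stretch the 16x4 frame over the whole 640x480 view.
    # On the real rig this is where the affine registration (SCALE/ROT/OFFSET) belongs.
    heat = ir_to_heatmap(ir, (640, 480))
    image = cv2.addWeighted(image, 0.7, heat, 0.3, 0)

    faces = face_cascade.detectMultiScale(gray, 1.3, 6)
    for (x, y, w, h) in faces:
        fh = int(h / 4)   # forehead box: full face width, top quarter of the face
        cv2.rectangle(image, (x, y), (x + w, y + fh), (255, 255, 0), 2)
        # Map the forehead box back onto the 16 (rows) x 4 (cols) grid, matching the stretch above.
        r0, r1 = max(0, y * 16 // 480), min(16, (y + fh) * 16 // 480 + 1)
        c0, c1 = max(0, x * 4 // 640), min(4, (x + w) * 4 // 640 + 1)
        reading = ir[r0:r1, c0:c1].mean()
        cv2.putText(image, 'forehead: {:.1f}'.format(reading), (10, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 0, 255), 2)

    cv2.imshow("Thermal forehead tracking", image)
    rawCapture.truncate(0)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

fifo.close()
cv2.destroyAllWindows()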









python opencv image-processing raspberry-pi

asked Nov 16 '18 at 8:40 (edited Nov 16 '18 at 8:52) – Wendy

    Start small. Put only the overlay into your first code. Not reading anything online. Just the opening of the fifo, the get overlay function, and adding.

    – deets
    Nov 16 '18 at 9:13
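
Following the suggestion above to start small, a first step could be a standalone check that the mlxd FIFO can be read and reshaped before anything is merged into the tracking loop. This assumes the mlxd daemon from the linked Hackaday project is running and writing frames to /var/run/mlx90621.sock:

import numpy as np

# Smoke test: read a single frame from the MLX90621 daemon FIFO and print it.
with open('/var/run/mlx90621.sock', 'rb') as fifo:
    raw = fifo.read()[0:128]                 # 64 pixels x 2 bytes per frame
    ir = np.frombuffer(raw, np.uint16).reshape((16, 4))[::-1, ::-1]
    print(ir)                                # should show a 16x4 grid of values
    print('min: {}  max: {}'.format(ir.min(), ir.max()))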















