Uses background subtraction from an image buffer to detect areas of
motion in a video.
The general process is to update the image buffer and then call the
MotionDetector's detect() method.
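
The sketch below shows one way this loop could look. It is a minimal sketch, not taken from the library's documentation: it assumes pyvision is importable as pv, that this class is reachable as pv.MotionDetector, and that pv.Video yields pv.Image frames. Only the method names and defaults listed in this summary are relied on, and the first few frames may yield no detections while the internal buffer (buffSize frames) fills.

    import pyvision as pv

    video = pv.Video("traffic.avi")   # assumed frame source; any iterable of pv.Image works
    md = pv.MotionDetector()          # defaults: thresh=20, method='BG_SUBTRACT_AMF', minArea=400, buffSize=5

    for frame in video:
        md.detect(frame)              # update the image buffer and detection results
        rects = md.getRects()         # bounding boxes of detected motion regions
        print("%d motion regions" % len(rects))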

__init__(self, imageBuff=None, thresh=20, method='BG_SUBTRACT_AMF', minArea=400,
         rectFilter=None, buffSize=5, soft_thresh=False, rect_type='BOUNDING_RECTS',
         rect_sigma=2.0, smooth=False)
    Constructor.
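
As an illustrative sketch of overriding the defaults, the call below uses only parameters from the signature above; the values are arbitrary, the import path pv.MotionDetector is an assumption, and the comments describe the presumed effect of each parameter rather than documented behavior.

    import pyvision as pv

    md = pv.MotionDetector(thresh=10,     # lower pixel-difference threshold (more sensitive)
                           minArea=200,   # accept smaller foreground blobs
                           buffSize=10,   # keep more frames in the image buffer
                           smooth=True)   # presumably smooths the foreground mask before contours are found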

detect(self, img, ConvexHulls=False)
    Call this method with each new image in the stream to update the detection results.
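
A per-frame sketch, continuing the loop from the class description above; the effect of ConvexHulls=True (also computing convex hulls of the detected contours) is inferred from the parameter name.

    # md: a pv.MotionDetector; frame: the newest pv.Image from the stream (see the sketch above)
    md.detect(frame, ConvexHulls=True)                       # update detection results, hulls included
    annotated = md.getAnnotatedImage(showConvexHulls=True)   # hulls can then be drawn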
source code
|
|
|
|
|
|
|

getForegroundMask(self)
    Returns: a binary pv.Image representing the foreground pixels as determined by the
    selected background subtraction method.
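
A short sketch of inspecting the mask; md is assumed to be a detector that has already processed enough frames, and show() is assumed to be the pv.Image display helper.

    mask = md.getForegroundMask()   # binary pv.Image: foreground vs. background pixels
    mask.show()                     # assumed display helper on pv.Image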

getForegroundPixels(self, bgcolor=None)
    Returns: the full-color foreground pixels on either a blank (black) background or a
    background color specified by the user.
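
A sketch of both call forms; the (r, g, b) tuple format for bgcolor is an assumption.

    fg_black = md.getForegroundPixels()                         # foreground on a blank (black) background
    fg_gray  = md.getForegroundPixels(bgcolor=(128, 128, 128))  # assumed RGB tuple for the fill color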

getRects(self)
    Returns: the bounding boxes of the external contours of the foreground mask.
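
A sketch of consuming the rectangles; treating each entry as a pv.Rect with x, y, w, h attributes is an assumption about the rectangle type.

    for rect in md.getRects():
        # rect is assumed to expose x, y, w, h
        print("motion at (%d, %d), size %d x %d" % (rect.x, rect.y, rect.w, rect.h))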

getAnnotatedImage(self, showRects=True, showContours=False, showConvexHulls=False, showFlow=False)
    Returns: the annotation image with the selected objects (rectangles, contours, convex
    hulls, and/or flow) drawn on it.
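
A sketch toggling the overlay flags; show() on the returned pv.Image is an assumed display helper, and showConvexHulls presumably requires that detect() was called with ConvexHulls=True.

    annotated = md.getAnnotatedImage(showRects=True,
                                     showContours=True,
                                     showConvexHulls=False,
                                     showFlow=False)
    annotated.show()   # assumed pv.Image display helper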

annotateFrame(self, key_frame, rect_color='yellow', contour_color='#00FF00', flow_color='white')
    Draws detection results on an image (key_frame) specified by the user.
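
A sketch of drawing onto a caller-supplied frame; the color values are arbitrary stand-ins in the same string formats as the defaults, and show() is an assumed pv.Image display helper.

    # Draw the current detection results directly onto the caller's pv.Image.
    md.annotateFrame(frame, rect_color='red', contour_color='#00FF00', flow_color='white')
    frame.show()   # assumed display helper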

getForegroundTiles(self, bgcolor=None)
    Returns: a list of "tiles", where each tile is a small pv.Image representing the
    clipped area of the annotationImg based on the motion detection.
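
A sketch collecting the per-region tiles; converting each tile with asPIL() before saving is an assumption about the pv.Image API.

    tiles = md.getForegroundTiles()                # one small pv.Image per detected motion region
    for i, tile in enumerate(tiles):
        tile.asPIL().save("tile_%02d.png" % i)     # assumed pv.Image -> PIL conversion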

Inherited from object:
    __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__,
    __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__