{"id":821,"date":"2021-06-23T10:21:47","date_gmt":"2021-06-23T10:21:47","guid":{"rendered":"https:\/\/ods-research.org\/?page_id=821"},"modified":"2021-07-14T18:53:13","modified_gmt":"2021-07-14T18:53:13","slug":"vidcep","status":"publish","type":"page","link":"https:\/\/ods-research.org\/vidcep\/","title":{"rendered":"VIDCEP"},"content":{"rendered":"\n

Query-Aware Adaptive Windowing for Spatiotemporal Complex Video Event Processing for Internet of Multimedia Things

\"\"<\/figure><\/div>\n\n\n\n

With the evolution of the Internet of Things (IoT), there has been an exponential rise in ubiquitously deployed sensor devices. The extensive use of IoT applications in smart cities, smart homes, self-driving cars, and social media has driven enormous growth in multimedia data streams such as videos and images. We are now transitioning to the era of the Internet of Multimedia Things (IoMT), where unstructured data such as video is continuously streamed from visual sensors like CCTV cameras and smartphones. Video data is highly expressive but has traditionally been very difficult for machines to interpret. Middleware systems such as Complex Event Processing (CEP) mine patterns from data streams and notify users in a timely fashion. Current CEP systems have inherent limitations in mining event patterns from video streams due to video's unstructured data model, the lack of an expressive query language, and the resource intensiveness of video processing. This work introduces VidCEP, a data-driven, distributed, on-the-fly, near-real-time complex event matching framework for video streams, with five key contributions: