Three years ago, Amazon Web Services (AWS) launched Rekognition, a video scanning service designed to recognize key elements within a video frame. If IMDb ever helped you recall which actor starred in which movie, then think of Rekognition as the ultimate video search and analysis tool. It detects objects, motion (including a subject's path through a scene), faces (with emotional interpretation), and text within a frame. It even has filters to identify celebrities and unsafe content.
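To make that concrete, here is a minimal sketch of what a single-image label request might look like with the boto3 Rekognition client. The `detect_labels` operation and its response shape are part of the real AWS API; the bucket and file names are placeholders, and the `summarize_labels` helper is purely illustrative:

```python
def detect_scene_labels(bucket, key, max_labels=10, min_confidence=80.0):
    """Ask Rekognition to label an image stored in S3.

    Requires AWS credentials to be configured; the bucket and key
    passed in are assumed to exist (placeholders in this sketch).
    """
    import boto3  # deferred so the parsing helper below works without AWS
    client = boto3.client("rekognition")
    return client.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )


def summarize_labels(response, min_confidence=80.0):
    """Reduce a detect_labels-style response to (name, confidence) pairs."""
    return [
        (label["Name"], round(label["Confidence"], 1))
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]
```

The response is plain JSON, which is the point of the next section: Rekognition's job is turning pixels into exactly this kind of searchable, text-based data.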
Rekognition is a bigger deal than most of us realize, simply because the entire web, apps included, relies heavily on text-based data. And video is not text-based. To bridge that gap, video must be scanned and interpreted for recognizable features like a mountain, a car, or a building, and then refined into Mount Kilimanjaro, an Audi A8, or the Osterlin Library. And remember, a video comprises hundreds to millions of frames, depending on its length.
As a primer on AWS services, think of them like LEGO bricks: they stack. Multiple services can be chained together in custom workflows. Now imagine a news feed or last night's game being fed to Rekognition. Want to see key moments from a favorite player or team? No problem. Swipe or scroll through highlights to find what you're looking for.
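The highlight idea could be sketched on top of Rekognition's asynchronous video API (`StartLabelDetection` / `GetLabelDetection`, whose results pair each detected label with a millisecond timestamp). The helper below is an illustrative assumption, not a Rekognition feature; it just filters that result list down to the moments a chosen label appears:

```python
def highlight_timestamps(detection_labels, wanted_name, min_confidence=90.0):
    """Collect the moments (in seconds) where a label appears in a video.

    `detection_labels` is assumed to be the "Labels" list from a
    GetLabelDetection-style response: each entry carries a millisecond
    "Timestamp" and a nested "Label" with "Name" and "Confidence".
    """
    seconds = {
        entry["Timestamp"] / 1000.0
        for entry in detection_labels
        if entry["Label"]["Name"] == wanted_name
        and entry["Label"]["Confidence"] >= min_confidence
    }
    return sorted(seconds)
```

A player-specific feed would need more than generic labels (face matching against a known roster, for example), but the shape of the workflow is the same: one service produces timestamped tags, and another service or app filters them.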
And Rekognition is being used for far more than this. Combined with artificial intelligence (AI) and machine learning (ML), Rekognition can build its own database of recognized objects, which can then be rendered into multiple languages. So while you're fast asleep tonight, AWS will be churning out petabytes of object tags for your future use and convenience.