Amazon Rekognition can now identify the location of objects in an image, and determine relationships between those objects. (AWS Photo / @jeffbarr)

Amazon Web Services released new capabilities for its Rekognition image and video analysis service Friday that let companies sorting through massive numbers of images zero in on the part of an image that contains a desired object.

At this point, Rekognition is best known for setting off alarm bells among civil liberties advocates after police departments began experimenting with its facial-recognition capabilities. But other customers of the service are using it to find objects in an image, like consumer goods or offensive content, to improve search results. Rekognition was previously able to identify those objects in photos, but now it can tell users exactly where those objects can be found.

The new capabilities will also allow customers to detect the number of times a given object appears in an image, as well as the relationship between two objects in an image, such as “dog on couch,” AWS said in a blog post.

AWS added a viewer for “bounding boxes” around objects identified by Rekognition to make this possible. When computers search for something in an image, they break the image down into defined regions; more sophisticated machine-learning models can be trained to draw precise boxes around objects, but services don’t always surface that information to the end user.
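As a rough illustration of how a developer might consume these results, the sketch below parses a response in the JSON shape AWS documents for Rekognition’s `DetectLabels` operation: each label can carry `Instances` (one per detected occurrence, with a ratio-based `BoundingBox`) and `Parents` (the broader categories that support relationships like “dog on couch”). The sample response data and helper function names here are hypothetical, made up for the example; only the field names follow the documented API shape.

```python
# Sketch of parsing a DetectLabels-style response (assumed/sample data, not a
# live API call). BoundingBox values are ratios of the image's width/height.
sample_response = {
    "Labels": [
        {
            "Name": "Dog",
            "Confidence": 98.1,
            "Parents": [{"Name": "Pet"}, {"Name": "Animal"}],
            "Instances": [
                {"BoundingBox": {"Left": 0.32, "Top": 0.41, "Width": 0.25, "Height": 0.30},
                 "Confidence": 98.1},
                {"BoundingBox": {"Left": 0.05, "Top": 0.50, "Width": 0.20, "Height": 0.28},
                 "Confidence": 91.1},
            ],
        },
        {"Name": "Couch", "Confidence": 95.0,
         "Parents": [{"Name": "Furniture"}], "Instances": []},
    ]
}

def count_instances(response, label_name):
    """Count how many times a labeled object appears in the image."""
    for label in response["Labels"]:
        if label["Name"] == label_name:
            return len(label["Instances"])
    return 0

def pixel_boxes(response, label_name, img_w, img_h):
    """Convert ratio-based bounding boxes to (left, top, width, height) pixels."""
    boxes = []
    for label in response["Labels"]:
        if label["Name"] != label_name:
            continue
        for inst in label["Instances"]:
            b = inst["BoundingBox"]
            boxes.append((round(b["Left"] * img_w), round(b["Top"] * img_h),
                          round(b["Width"] * img_w), round(b["Height"] * img_h)))
    return boxes

print(count_instances(sample_response, "Dog"))        # 2
print(pixel_boxes(sample_response, "Dog", 640, 480))  # pixel coordinates per dog
```

In a real application the response would come from a `boto3` Rekognition client’s `detect_labels` call, and the boxes could be drawn over the image, which is essentially what AWS’s new bounding-box viewer does.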

Given that AWS has spent a fair amount of time this year defending Rekognition, expect to hear a lot more about its non-surveillance capabilities at AWS re:Invent 2018, coming up at the end of November. AWS introduced Rekognition’s video-analysis features last year during the event.
