**To detect unsafe content in an image**

The following ``detect-moderation-labels`` command detects unsafe content in the specified image stored in an Amazon S3 bucket. ::

    aws rekognition detect-moderation-labels \
        --image "S3Object={Bucket=MyImageS3Bucket,Name=gun.jpg}"

Output::

    {
        "ModerationModelVersion": "3.0", 
        "ModerationLabels": [
            {
                "Confidence": 97.29618072509766, 
                "ParentName": "Violence", 
                "Name": "Weapon Violence"
            }, 
            {
                "Confidence": 97.29618072509766, 
                "ParentName": "", 
                "Name": "Violence"
            }
        ]
    }
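In a script, you would typically filter the returned labels client-side, for example keeping only top-level labels (those with an empty ``ParentName``) above a confidence threshold. A minimal Python sketch, assuming the response above has been captured as JSON (the threshold of 90 is an arbitrary illustration)::

    import json

    # Example response copied from the output shown above.
    response = json.loads("""
    {
        "ModerationModelVersion": "3.0",
        "ModerationLabels": [
            {"Confidence": 97.29618072509766, "ParentName": "Violence", "Name": "Weapon Violence"},
            {"Confidence": 97.29618072509766, "ParentName": "", "Name": "Violence"}
        ]
    }
    """)

    # Keep only top-level labels (no parent) at or above the chosen threshold.
    top_level = [label["Name"]
                 for label in response["ModerationLabels"]
                 if label["ParentName"] == "" and label["Confidence"] >= 90]
    print(top_level)  # ['Violence']

The same filtering could be done server-side with the command's ``--min-confidence`` parameter, which drops labels below the given confidence before the response is returned.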

For more information, see `Detecting Unsafe Images <https://docs.aws.amazon.com/rekognition/latest/dg/procedure-moderate-images.html>`__ in the *Amazon Rekognition Developer Guide*.