You set up AWS Config Aggregators expecting centralized visibility across your organization — including deleted resources. Then you try to query for recently deleted S3 buckets or EC2 instances and get nothing back. Not a permissions error. Just nothing.
The limitation is by design, and it’s not documented prominently enough: Config Aggregators only aggregate currently existing resources. When a resource is deleted in a member account, Config marks it ResourceDeleted in that account’s history — and removes it from the aggregator’s centralized view entirely.
No cross-account deletion history. No centralized deletion timeline. SQL queries against aggregators return zero rows for deleted resources, even when you know deletions happened. This is not a bug — it's how aggregators work.
## What aggregators actually give you
Config Aggregators are useful for:
- Current resource inventory — active resources across all accounts and regions
- Live compliance status — real-time config rule evaluations org-wide
- Active resource queries — advanced SQL over existing resources
What they cannot do:
- Show deleted resources in centralized queries
- Provide deletion timelines across accounts
- Support historical compliance queries that reference removed resources
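The asymmetry is easy to model. A toy sketch in plain Python (no AWS calls, purely illustrative) of how an aggregator's view diverges from a member account's history once a resource is deleted:

```python
# Toy model of the behavior described above: the member account's
# history keeps ResourceDeleted items, while the aggregated view
# drops them entirely. Illustrative only; not an AWS API.

def account_history(items):
    """A member account's Config history: all items, including deleted."""
    return list(items)

def aggregator_view(items):
    """The aggregator's view: deleted resources vanish from queries."""
    return [i for i in items if i["status"] != "ResourceDeleted"]

items = [
    {"resourceId": "bucket-a", "status": "OK"},
    {"resourceId": "bucket-b", "status": "ResourceDeleted"},
]

# The account sees both records; the aggregator sees only bucket-a.
```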
## The SQL dead end
This query will always return zero rows from an aggregator, even when resources were deleted yesterday:
```sql
-- This will NOT work in Config Aggregator advanced queries
SELECT
  resourceId,
  resourceType,
  accountId,
  awsRegion,
  configurationItemCaptureTime
WHERE
  configurationItemStatus = 'ResourceDeleted'
  AND configurationItemCaptureTime > '2024-12-17T00:00:00.000Z'
```
The same query works fine when run against an individual account’s Config service. But there is no equivalent for aggregators.
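Per account, the same expression runs through `select_resource_config`. A minimal sketch; the `build_deletion_query` helper is my own, but the expression matches the query above:

```python
def build_deletion_query(cutoff_iso):
    """Build the per-account advanced query for deleted resources.
    cutoff_iso is an ISO-8601 timestamp string."""
    return (
        "SELECT resourceId, resourceType, accountId, awsRegion, "
        "configurationItemCaptureTime WHERE "
        "configurationItemStatus = 'ResourceDeleted' "
        f"AND configurationItemCaptureTime > '{cutoff_iso}'"
    )

# Run against a single account's Config service (works), not an
# aggregator (returns zero rows for deleted resources):
#
#   config = boto3.client('config')
#   results = config.select_resource_config(
#       Expression=build_deletion_query('2024-12-17T00:00:00.000Z')
#   )
```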
## Querying deleted resources per account
To find deleted resources, you must query each account individually. The per-account APIs do return deletion records: `list_discovered_resources` with `includeDeletedResources=True` enumerates deleted resources, and `get_resource_config_history` returns the full `ResourceDeleted` configuration item for a given `resourceId`. Neither can be driven through an aggregator.
```python
import boto3
from datetime import datetime, timedelta, timezone

def find_deleted_resources_per_account(account_ids, regions, resource_type, days_back=30):
    """
    Find deleted resources by querying each account individually.
    This cannot use Config Aggregators; each account must be queried directly.
    """
    deleted_resources = []
    end_time = datetime.now(timezone.utc)
    start_time = end_time - timedelta(days=days_back)

    for account_id in account_ids:
        for region in regions:
            try:
                sts = boto3.client('sts')
                creds = sts.assume_role(
                    RoleArn=f"arn:aws:iam::{account_id}:role/ConfigQueryRole",
                    RoleSessionName='DeletedResourceQuery'
                )['Credentials']
                config_client = boto3.client(
                    'config',
                    region_name=region,
                    aws_access_key_id=creds['AccessKeyId'],
                    aws_secret_access_key=creds['SecretAccessKey'],
                    aws_session_token=creds['SessionToken']
                )

                # get_resource_config_history requires a resourceId, so first
                # enumerate resources (including deleted ones) in this account.
                paginator = config_client.get_paginator('list_discovered_resources')
                deleted_ids = [
                    ident['resourceId']
                    for page in paginator.paginate(
                        resourceType=resource_type,
                        includeDeletedResources=True
                    )
                    for ident in page['resourceIdentifiers']
                    if ident.get('resourceDeletionTime')
                ]

                for resource_id in deleted_ids:
                    # earlierTime is the older bound, laterTime the newer one.
                    history = config_client.get_resource_config_history(
                        resourceType=resource_type,
                        resourceId=resource_id,
                        earlierTime=start_time,
                        laterTime=end_time
                    )
                    for item in history['configurationItems']:
                        if item['configurationItemStatus'] == 'ResourceDeleted':
                            deleted_resources.append({
                                'ResourceId': item['resourceId'],
                                'ResourceType': resource_type,
                                'AccountId': account_id,
                                'Region': region,
                                'DeletionTime': item['configurationItemCaptureTime'],
                                'ResourceName': item.get('resourceName', 'N/A')
                            })
            except Exception as e:
                print(f"Error querying account {account_id} in {region}: {e}")
                continue
    return deleted_resources
```
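The list of dicts this returns rolls up easily before storing or alerting. A small helper (hypothetical, matching the record shape above):

```python
from collections import Counter

def summarize_deletions(deleted_resources):
    """Count deletions per (AccountId, Region) from the records above."""
    return Counter(
        (r["AccountId"], r["Region"]) for r in deleted_resources
    )

# Sample records in the shape returned by find_deleted_resources_per_account:
records = [
    {"AccountId": "111111111111", "Region": "us-east-1"},
    {"AccountId": "111111111111", "Region": "us-east-1"},
    {"AccountId": "222222222222", "Region": "us-west-2"},
]
```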
## Three alternatives that actually work
CloudTrail captures every deletion API call. `lookup_events` queries an account's 90-day event history for a deletion event name and returns who deleted what and when, without touching Config at all. It is per-account, so you still assume a role into each member account; for events older than 90 days, query your organization trail's S3 logs (for example with Athena).
```python
def find_deletions_via_cloudtrail(account_ids, start_time, end_time):
    """
    Query CloudTrail in each account for DeleteBucket calls.
    lookup_events only sees the account the client is bound to, so we
    assume a role in each member account (same ConfigQueryRole as above).
    """
    deletion_events = []
    sts = boto3.client('sts')
    for account_id in account_ids:
        creds = sts.assume_role(
            RoleArn=f"arn:aws:iam::{account_id}:role/ConfigQueryRole",
            RoleSessionName='DeletionAudit'
        )['Credentials']
        cloudtrail = boto3.client(
            'cloudtrail',
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken']
        )
        paginator = cloudtrail.get_paginator('lookup_events')
        pages = paginator.paginate(
            LookupAttributes=[
                {'AttributeKey': 'EventName', 'AttributeValue': 'DeleteBucket'}
            ],
            StartTime=start_time,
            EndTime=end_time
        )
        for page in pages:
            for event in page['Events']:
                deletion_events.append({
                    'AccountId': account_id,
                    'EventName': event['EventName'],
                    'EventTime': event['EventTime'],
                    'Username': event.get('Username', 'N/A'),
                    'Resources': event.get('Resources', [])
                })
    return deletion_events
```
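`lookup_events` filters on a single `EventName`, and each service has its own deletion call, so tracking multiple resource types means one lookup per event name. A small mapping for a few well-known types (verify the event names for any other services you track):

```python
# CloudTrail event names for common deletion calls. Each service
# uses its own name, so run one lookup_events filter per entry.
DELETION_EVENT_NAMES = {
    "AWS::S3::Bucket": "DeleteBucket",
    "AWS::EC2::Instance": "TerminateInstances",
    "AWS::DynamoDB::Table": "DeleteTable",
    "AWS::RDS::DBInstance": "DeleteDBInstance",
}

def deletion_event_name(resource_type):
    """Map a Config resource type to its CloudTrail deletion event."""
    return DELETION_EVENT_NAMES.get(resource_type)
```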
EventBridge real-time capture gets deletion events as they happen. An EventBridge rule that matches deletion API calls and routes to Lambda or SQS gives you a deletion log that persists in your own storage, independent of Config retention.
```json
{
  "Rules": [
    {
      "Name": "S3BucketDeletions",
      "EventPattern": {
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
          "eventSource": ["s3.amazonaws.com"],
          "eventName": ["DeleteBucket"]
        }
      },
      "Targets": [
        {
          "Id": "1",
          "Arn": "arn:aws:lambda:us-east-1:123456789012:function:ProcessDeletion"
        }
      ]
    }
  ]
}
```
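The `ProcessDeletion` target can be a small Lambda handler that flattens the CloudTrail detail into a record for your own store. A sketch; the commented-out `persist_record` stands in for your S3 or DynamoDB write:

```python
def handler(event, context):
    """EventBridge target: extract deletion details from an
    'AWS API Call via CloudTrail' event and persist them."""
    detail = event["detail"]
    record = {
        "EventName": detail["eventName"],
        "EventTime": detail["eventTime"],
        "Actor": detail.get("userIdentity", {}).get("arn", "unknown"),
        "Bucket": detail.get("requestParameters", {}).get("bucketName"),
    }
    # persist_record(record)  # hypothetical: write to S3/DynamoDB
    return record

# A trimmed-down sample of the event shape EventBridge delivers:
sample_event = {
    "detail": {
        "eventName": "DeleteBucket",
        "eventTime": "2024-12-17T12:00:00Z",
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
        "requestParameters": {"bucketName": "my-bucket"},
    }
}
```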
Custom multi-account Lambda automation schedules regular queries across all accounts using the per-account Config history API, then writes results to a central S3 bucket or DynamoDB table for aggregated queries.
```python
def build_deletion_inventory(organization_accounts):
    all_deletions = []
    for account in organization_accounts:
        account_deletions = find_deleted_resources_per_account(
            account_ids=[account['Id']],
            regions=['us-east-1', 'us-west-2'],
            resource_type='AWS::S3::Bucket',
            days_back=30
        )
        all_deletions.extend(account_deletions)
    # store_deletion_data is your own persistence layer, e.g. a put to a
    # central S3 bucket or a batch write to DynamoDB.
    store_deletion_data(all_deletions)
    return all_deletions
```
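If DynamoDB is the central store, each record needs a key. A hypothetical transform, assuming a table keyed on `ResourceId` (partition) plus `DeletionTime` (sort):

```python
def to_dynamodb_item(record):
    """Shape a deletion record for a DynamoDB PutItem call.
    The table schema (ResourceId + DeletionTime keys) is hypothetical."""
    return {
        "ResourceId": {"S": record["ResourceId"]},
        "DeletionTime": {"S": str(record["DeletionTime"])},
        "AccountId": {"S": record["AccountId"]},
        "Region": {"S": record["Region"]},
        "ResourceType": {"S": record["ResourceType"]},
    }

# Sample record in the shape produced by find_deleted_resources_per_account:
record = {
    "ResourceId": "old-bucket",
    "DeletionTime": "2024-12-18T10:00:00Z",
    "AccountId": "111111111111",
    "Region": "us-east-1",
    "ResourceType": "AWS::S3::Bucket",
}
```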
Use Config Aggregators for current resource inventory and live compliance monitoring. For deleted resource tracking, you need CloudTrail, EventBridge, or custom automation. These are complementary tools, not substitutes for each other.