By default, the CloudWatch agent monitors every mounted filesystem on the instance. On a container host or a system with network mounts, that means dozens of overlay, tmpfs, nfs, and pseudo-filesystem entries alongside the actual storage you care about. Every one of those generates a metric, runs a collection cycle, and contributes to your CloudWatch bill.
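The cost side is easy to ballpark. A minimal sketch, assuming the first-tier custom-metric rate of $0.30 per metric per month (an assumption based on us-east-1 pricing at time of writing; check current CloudWatch pricing) and a hypothetical 100-host fleet:

```shell
# Back-of-envelope: hosts * filesystems * rate per metric-month.
# 0.30 is the first-tier us-east-1 custom-metric rate at time of
# writing -- an assumption, not a quote; check current pricing.
awk -v h=100 -v fs=60 'BEGIN { printf "unfiltered: $%.2f/mo\n", h*fs*0.30 }'
awk -v h=100 -v fs=3  'BEGIN { printf "filtered:   $%.2f/mo\n", h*fs*0.30 }'
```

Sixty mounts per container host is typical once overlays and pseudo-filesystems pile up; three is what usually remains after filtering. The gap is the bill you are paying for metrics nobody looks at.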
The fix is a single ignore_file_system_types list in the agent config. Here’s the complete one:
{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "ignore_file_system_types": [
          "9p", "afs", "amazon-efs-client-utils", "amazon-ssm-agent",
          "anon_inodefs", "aufs", "autofs", "bdev", "beegfs", "binfmt_misc",
          "bindfs", "btrfs-subvol", "cephfs", "cgroup", "cgroup2", "cifs",
          "cluster", "coda", "configfs", "cramfs", "curlftpfs", "davfs2",
          "dax", "debugfs", "devicemapper", "devpts", "devtmpfs", "dlm",
          "drbd", "ecryptfs", "efs", "efs-utils", "efivarfs", "encfs",
          "exfat", "fat", "fat32", "fsx", "fuse", "fuseblk", "gfs", "gfs2",
          "glusterfs", "goofys", "gpfs", "hfs", "hfsplus", "hugetlbfs",
          "iscsi", "iso9660", "jffs2", "lustre", "loop*", "mountpoint-s3",
          "mqueue", "msdos", "multipath", "ncpfs", "nfs", "nfs4", "nfsd",
          "nsfs", "ntfs", "ntfs-3g", "ocfs2", "orangefs", "overlay",
          "overlay2", "pipefs", "proc", "pstore", "pvfs2", "ramfs",
          "romfs", "rootfs", "rpc_pipefs", "s3fs", "s3fs-fuse",
          "securityfs", "selinuxfs", "shm", "smb", "smb2", "smb3", "smbfs",
          "sockfs", "squashfs", "sshfs", "sunrpc", "sysfs", "systemd-1",
          "tmpfs", "tracefs", "ubifs", "udev", "udf", "unionfs-fuse",
          "vfat", "vmhgfs", "xenfs"
        ]
      }
    }
  }
}
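A config this long is easy to break with a stray comma, and the agent rejects malformed JSON at fetch-config time. A quick sanity check using Python's built-in json.tool (the path is the default install location assumed throughout this post):

```shell
# Fail fast on JSON syntax errors before handing the file to the agent.
# Adjust the path if your config lives elsewhere.
CONFIG=/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
if python3 -m json.tool "$CONFIG" > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config has a JSON syntax error"
fi
```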
Why each category is in there
Virtual and pseudo filesystems (proc, sysfs, devpts, cgroup, cgroup2, tmpfs, ramfs, devtmpfs, securityfs, selinuxfs, debugfs, tracefs, pstore, efivarfs, configfs, binfmt_misc, mqueue, sockfs, pipefs, anon_inodefs, nsfs, hugetlbfs) — these are kernel interfaces and memory-based constructs. They don’t represent persistent storage. Monitoring used_percent on tmpfs tells you nothing actionable about disk capacity.
Network filesystems (nfs, nfs4, cifs, smb, smb2, smb3, smbfs, ncpfs, coda, afs, 9p, beegfs, glusterfs, gpfs, gfs, gfs2, ocfs2, orangefs, pvfs2, lustre, drbd) — querying network mounts from the CloudWatch agent adds latency to every collection cycle. If a mount is unavailable, you get timeouts. Monitor network filesystems at the storage layer, not from every client.
AWS managed storage (efs, efs-utils, amazon-efs-client-utils, fsx, mountpoint-s3, amazon-ssm-agent) — EFS and FSx have their own CloudWatch metrics. Monitoring them through the disk agent creates duplicate data and charges you twice. mountpoint-s3 shows object storage through a FUSE layer; the meaningful metrics are on the S3 side.
Object storage FUSE drivers (s3fs, s3fs-fuse, goofys, curlftpfs, davfs2, sshfs, encfs, bindfs, unionfs-fuse) — same reason as above. These front API-backed storage. Disk utilization metrics from FUSE drivers are misleading and expensive to collect.
Container overlay systems (overlay, overlay2, aufs, devicemapper, loop*, squashfs) — on a container host, every running container has an overlay mount. A node running 50 containers generates 50+ overlay filesystem entries. None of those represent real disk pressure you need to alert on — the host filesystem is what fills up.
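You can see the overlay explosion directly on a container host. A quick Linux-only check (field 3 of /proc/mounts is the filesystem type):

```shell
# Count overlay mounts -- expect roughly one per running container.
awk '$3 == "overlay"' /proc/mounts | wc -l
```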
Removable and legacy formats (iso9660, udf, fat, fat32, vfat, exfat, msdos, ntfs, ntfs-3g, hfs, hfsplus, jffs2, ubifs, romfs, cramfs) and hypervisor guest filesystems (xenfs, vmhgfs) — optical discs, USB drives, Windows partitions, and virtualization shims. Unlikely to appear in production EC2, and not what you’re alerting on if they do.
Cluster and distributed filesystems (cephfs, glusterfs, gpfs, ocfs2, cluster, multipath) — these have dedicated monitoring. Watching them from each node through the CloudWatch agent is redundant.
What remains after filtering
After applying this list, you’ll typically see only:
- / — the root filesystem, usually ext4 or xfs
- /data, /var, /home, /opt — any explicitly mounted data volumes
- Application-specific mounts you’ve created
Those are the ones that fill up. Those are the ones you want alerts on.
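The filtering itself is plain type matching, so it is easy to sanity-check offline. A sketch with a fabricated df-style sample and an abridged ignore set — the same logic the agent applies, not its actual code:

```shell
# Simulate the type filter: drop any mount whose type is in IGNORE.
# The sample data is made up; IGNORE is abridged from the full config.
IGNORE="tmpfs devtmpfs proc sysfs overlay squashfs nfs4"
printf '%s\n' \
  "/dev/nvme0n1p1 ext4 /" \
  "/dev/nvme1n1 xfs /data" \
  "tmpfs tmpfs /run" \
  "overlay overlay /var/lib/docker/overlay2/abc/merged" \
  "overlay overlay /var/lib/docker/overlay2/def/merged" \
  "fs-0abc:/ nfs4 /mnt/shared" |
while read -r dev fstype mnt; do
  case " $IGNORE " in
    *" $fstype "*) ;;                # excluded from metrics
    *) echo "$mnt ($fstype)" ;;      # would be monitored
  esac
done
```

Only "/ (ext4)" and "/data (xfs)" survive. One caveat: the loop* entry in the real list looks like a glob pattern, which this exact-match sketch doesn't handle.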
Check before you deploy
Before applying in production, verify what’s actually mounted:
df -T
Look at the Type column. Anything that shows up as a type you want to monitor should not be in your exclude list. The common production filesystems — ext4, xfs, btrfs — are not in the list above and will be monitored by default.
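To enumerate the types mechanically rather than eyeballing the column:

```shell
# One deduplicated filesystem type per line (column 2 of `df -T`).
# Any type printed here that also appears in ignore_file_system_types
# will be silently dropped from metrics -- make sure that's intended.
df -T | awk 'NR > 1 {print $2}' | sort -u
```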
Complete disk section with collection interval
The same disk block, now with inode monitoring and a five-minute collection interval; it slots into metrics.metrics_collected exactly as in the first example:
{
  "disk": {
    "measurement": ["used_percent", "inodes_used"],
    "metrics_collection_interval": 300,
    "ignore_file_system_types": [
      "9p", "afs", "amazon-efs-client-utils", "amazon-ssm-agent",
      "anon_inodefs", "aufs", "autofs", "bdev", "beegfs", "binfmt_misc",
      "bindfs", "btrfs-subvol", "cephfs", "cgroup", "cgroup2", "cifs",
      "cluster", "coda", "configfs", "cramfs", "curlftpfs", "davfs2",
      "dax", "debugfs", "devicemapper", "devpts", "devtmpfs", "dlm",
      "drbd", "ecryptfs", "efs", "efs-utils", "efivarfs", "encfs",
      "exfat", "fat", "fat32", "fsx", "fuse", "fuseblk", "gfs", "gfs2",
      "glusterfs", "goofys", "gpfs", "hfs", "hfsplus", "hugetlbfs",
      "iscsi", "iso9660", "jffs2", "lustre", "loop*", "mountpoint-s3",
      "mqueue", "msdos", "multipath", "ncpfs", "nfs", "nfs4", "nfsd",
      "nsfs", "ntfs", "ntfs-3g", "ocfs2", "orangefs", "overlay",
      "overlay2", "pipefs", "proc", "pstore", "pvfs2", "ramfs",
      "romfs", "rootfs", "rpc_pipefs", "s3fs", "s3fs-fuse",
      "securityfs", "selinuxfs", "shm", "smb", "smb2", "smb3", "smbfs",
      "sockfs", "squashfs", "sshfs", "sunrpc", "sysfs", "systemd-1",
      "tmpfs", "tracefs", "ubifs", "udev", "udf", "unionfs-fuse",
      "vfat", "vmhgfs", "xenfs"
    ]
  }
}
Applying the config
Fetch the new config and restart the agent (the -s flag restarts it after loading):
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
Then confirm what the agent actually loaded:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -m ec2 -a query-config
The CloudWatch agent doesn't know which of your mounted filesystems matter. You do. The filter list is how you tell it. Without it, you're paying for metrics on kernel interfaces and container layers while your actual data volume fills up silently.