Introduction

The cloud has fundamentally changed how organizations build and operate their infrastructure. AWS, Azure, and Google Cloud Platform offer unmatched scalability, flexibility, and speed to market. But this power comes with a critical caveat: the shared responsibility model means that while cloud providers secure the infrastructure itself, the configuration of your environment is entirely your responsibility.

And that is where things go wrong. According to industry reports, misconfigurations are responsible for the vast majority of cloud security incidents — not sophisticated zero-day exploits or nation-state attacks, but simple, avoidable errors in how cloud resources are set up and managed. A misconfigured storage bucket, an overly permissive IAM policy, or a missing encryption setting can expose an entire organization to data theft, ransomware, or regulatory penalties.

At A.KHAT, our cloud security assessments consistently reveal the same patterns across organizations of every size. Whether it is a startup running its first production workload on AWS or a large enterprise managing hundreds of Azure subscriptions, the underlying mistakes are remarkably similar. This article documents the ten most common cloud security mistakes we encounter, explains why each one matters, and provides concrete steps to address them.

The shared responsibility model is the most misunderstood concept in cloud computing. Your cloud provider secures the cloud. You are responsible for securing everything you put in it.

The 10 Most Common Cloud Security Mistakes

#1 Overly Permissive IAM Policies

This is the single most common finding in our cloud assessments, and it appears in virtually every environment we review. The pattern is always the same: developers or administrators need access to get work done quickly, so they grant broad permissions — often full AdministratorAccess or wildcard (*) actions on all resources. What starts as a temporary convenience becomes a permanent security liability.

An IAM policy granting "Action": "*" on "Resource": "*" effectively gives that principal unrestricted access to every service and every resource in the account. If those credentials are compromised through phishing, a leaked access key, or a vulnerable application, the attacker inherits that unrestricted access. We have seen cases where a single compromised service account with wildcard permissions allowed an attacker to exfiltrate data from S3, spin up cryptocurrency miners on EC2, and delete CloudTrail logs to cover their tracks — all within minutes.

How to fix it:

  • Implement the principle of least privilege rigorously. Every user, role, and service account should have only the permissions required for its specific function — nothing more.
  • Use managed policies with scoped permissions rather than inline policies with wildcards. AWS IAM Access Analyzer, Azure AD Privileged Identity Management, and GCP IAM Recommender can all identify over-provisioned permissions.
  • Conduct regular access reviews — quarterly at minimum — to identify and revoke unnecessary permissions. Permissions accumulate over time as roles change, and what was once needed may no longer be relevant.
  • Enforce session time limits and require re-authentication for sensitive operations.
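The worst offenders are easy to catch programmatically: a statement that pairs a wildcard action with a wildcard resource can be flagged in a few lines. A minimal sketch (the function and its name are ours for illustration, not part of any AWS SDK) that lints an IAM policy document:

```python
import json

def find_wildcard_statements(policy_json: str) -> list[dict]:
    """Return Allow statements that pair wildcard actions with wildcard resources."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may appear without a list
        statements = [statements]
    findings = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        # Flag "*" (all actions) or "service:*" (all actions in a service) on "*"
        if any(a == "*" or a.endswith(":*") for a in actions) and "*" in resources:
            findings.append(stmt)
    return findings

risky = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}'
assert find_wildcard_statements(risky)  # the admin-style statement is flagged
```

A check like this belongs in a CI pipeline gating infrastructure-as-code changes, so wildcard policies are rejected before they ever reach an account.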

#2 Publicly Exposed Storage Buckets and Blobs

Despite years of high-profile breaches, publicly accessible cloud storage remains alarmingly common. We regularly find S3 buckets, Azure Blob Storage containers, and GCP Cloud Storage buckets configured to allow unauthenticated access — often containing sensitive data such as customer records, application logs with embedded credentials, database backups, and internal documents.

The Capital One breach of 2019, which exposed over 100 million customer records, began with a misconfigured web application firewall but was compounded by an IAM role with excessively broad S3 permissions. More recently, numerous organizations have been found exposing sensitive data through Azure Blob containers with public access enabled by default. These are not theoretical risks — attackers actively scan for publicly accessible cloud storage using automated tools.

How to fix it:

  • Enable S3 Block Public Access at the account level in AWS. In Azure, disable anonymous access at the storage account level. In GCP, use uniform bucket-level access and avoid allUsers or allAuthenticatedUsers permissions.
  • Implement bucket policies that explicitly deny public access and use condition keys to restrict access to specific VPCs, IP ranges, or IAM principals.
  • Use cloud-native tools — AWS Access Analyzer for S3, Azure Defender for Storage, GCP Security Command Center — to continuously monitor for publicly accessible resources.
  • Audit all existing storage resources and classify data stored in each bucket. Sensitive data should never be in a bucket that could be made public, even accidentally.
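The telltale pattern in a bucket policy is an Allow statement with an anonymous principal and no restricting condition. A minimal sketch of such a check (our own illustrative helper, operating on a bucket policy document; it is not a substitute for Block Public Access):

```python
import json

def allows_public_access(bucket_policy_json: str) -> bool:
    """True if any Allow statement grants access to an anonymous
    principal ("*") without a Condition block restricting it."""
    policy = json.loads(bucket_policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_anonymous = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        # A Condition (e.g. aws:SourceVpc) can scope a "*" principal down,
        # so only flag unconditioned anonymous grants.
        if is_anonymous and not stmt.get("Condition"):
            return True
    return False
```

Note that a policy with `Principal: "*"` plus a VPC or source-IP condition is not public in the same sense, which is why the check inspects both fields.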

#3 Missing MFA on Root and Admin Accounts

Multi-factor authentication is one of the most effective security controls available, yet we frequently find cloud environments where the root account, global administrators, or highly privileged service accounts are protected by nothing more than a password. A single compromised password — obtained through phishing, credential stuffing, or a breach of another service where the password was reused — gives an attacker the keys to the entire cloud environment.

The root account in AWS, the Global Administrator role in Azure AD, and the Organization Administrator in GCP have unrestricted power. They can create and delete resources, modify billing, change security settings, and even lock out all other users. Protecting these accounts with only a password is the cloud equivalent of leaving your front door unlocked.

How to fix it:

  • Enforce MFA on every account, starting with root and administrative accounts. Use hardware security keys (FIDO2/WebAuthn) for the highest-privilege accounts rather than SMS or TOTP, which are vulnerable to SIM-swapping and real-time phishing attacks.
  • Never use the root account for day-to-day operations. In AWS, lock the root account's access keys, enable MFA, and use IAM roles instead. In Azure, minimize the number of Global Administrators and use Privileged Identity Management for just-in-time elevation.
  • Implement conditional access policies that require MFA for all administrative actions, access from new devices, or access from outside trusted networks.
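In AWS, the IAM credential report is a convenient input for auditing MFA coverage. A sketch of such an audit, assuming the report's documented CSV columns (`user`, `password_enabled`, `mfa_active`); the helper function is ours, not part of any SDK:

```python
import csv
import io

def users_without_mfa(report_csv: str) -> list[str]:
    """From an IAM credential report (CSV text), list console users
    whose MFA is not active. Service users without a console password
    are skipped, since password-less principals cannot use console MFA."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in rows
        if row.get("password_enabled") == "true" and row.get("mfa_active") == "false"
    ]
```

Running a check like this on a schedule, and alerting on a non-empty result, turns the MFA requirement from a policy statement into an enforced control.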

#4 Default Security Groups and Firewall Rules

Default security groups and firewall rules are designed for ease of initial setup, not for production security. We routinely find cloud environments where SSH (port 22) or RDP (port 3389) is open to 0.0.0.0/0 — meaning the entire internet can attempt to connect. Database ports (3306 for MySQL, 5432 for PostgreSQL, 27017 for MongoDB) exposed to the public internet are equally common and equally dangerous.

Automated scanners continuously sweep cloud IP ranges for open management ports. An SSH server exposed to the internet will begin receiving brute-force login attempts within minutes of deployment. RDP exposed to the internet is one of the most common initial access vectors for ransomware operators. Even if you are using strong passwords and have patched your systems, the attack surface is unnecessarily large.

How to fix it:

  • Restrict management access to specific, trusted IP ranges or, better yet, require VPN or bastion host access for SSH and RDP. No management port should ever be open to 0.0.0.0/0.
  • Use security groups as allowlists, not blocklists. Start with denying everything and explicitly allow only the traffic that is required.
  • Implement network tiering: place web-facing resources in public subnets with tightly scoped security groups, and place databases, application servers, and internal services in private subnets with no direct internet access.
  • Use AWS Systems Manager Session Manager, Azure Bastion, or GCP Identity-Aware Proxy to provide secure, auditable access to instances without exposing management ports at all.
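Finding world-open management ports is mechanical once you have the rule data. A minimal sketch, assuming input shaped like the IpPermissions entries returned by EC2's DescribeSecurityGroups (FromPort, ToPort, IpRanges); the function itself is our illustration:

```python
MANAGEMENT_PORTS = {22, 3389}  # SSH, RDP

def open_management_rules(ip_permissions: list[dict]) -> list[dict]:
    """Flag ingress rules that expose SSH or RDP to 0.0.0.0/0."""
    findings = []
    for rule in ip_permissions:
        lo, hi = rule.get("FromPort"), rule.get("ToPort")
        if lo is None:
            # An all-traffic rule (IpProtocol "-1") has no port range at all.
            covers_mgmt = True
        else:
            covers_mgmt = any(lo <= port <= hi for port in MANAGEMENT_PORTS)
        world_open = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if covers_mgmt and world_open:
            findings.append(rule)
    return findings
```

The same logic extends naturally to database ports or to IPv6 (`::/0` in Ipv6Ranges); the point is that this class of misconfiguration is fully automatable.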

#5 Unencrypted Data at Rest and in Transit

Despite encryption being straightforward to enable in all major cloud platforms, we frequently find databases, storage volumes, and message queues operating without encryption at rest. We also encounter internal services communicating over plain HTTP within VPCs, under the assumption that internal traffic does not need encryption. Both assumptions are dangerous.

Data at rest without encryption is vulnerable to unauthorized access if storage media is compromised, if snapshots or backups are inadvertently shared, or if an attacker gains access to the underlying storage layer. Unencrypted internal traffic is vulnerable to interception if an attacker gains a foothold within the network — a scenario that is far from hypothetical in environments with flat network architectures.

How to fix it:

  • Enable encryption by default for all storage services. In AWS, enable default EBS encryption at the account level, use SSE-S3 or SSE-KMS for S3 buckets, and enable encryption for RDS instances and EFS file systems. Azure and GCP offer equivalent settings for their respective storage services.
  • Enforce TLS for all communications, including internal service-to-service traffic. Use service mesh solutions (Istio, Linkerd) or cloud-native options (AWS App Mesh, Azure Service Fabric) to implement mutual TLS without modifying application code.
  • Use customer-managed keys (CMKs) in AWS KMS, Azure Key Vault, or GCP Cloud KMS for sensitive workloads, allowing you to control key rotation and access policies independently of the cloud provider.
  • Implement policies that prevent the creation of unencrypted resources — AWS Service Control Policies, Azure Policy, or GCP Organization Policies can enforce encryption requirements across all accounts and projects.
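An encryption audit can start as simply as listing the volumes with the flag off. A sketch assuming input shaped like the Volumes entries from EC2's DescribeVolumes (the helper is ours; equivalent checks apply to RDS instances, S3 buckets, and snapshots):

```python
def unencrypted_volumes(volumes: list[dict]) -> list[str]:
    """Return the IDs of EBS volumes whose Encrypted flag is not set.

    Treats a missing flag as unencrypted, so incomplete data errs
    on the side of a finding rather than silence.
    """
    return [v["VolumeId"] for v in volumes if not v.get("Encrypted", False)]

inventory = [
    {"VolumeId": "vol-0aa11", "Encrypted": True},
    {"VolumeId": "vol-0bb22", "Encrypted": False},
    {"VolumeId": "vol-0cc33"},  # flag absent: treat as a finding
]
assert unencrypted_volumes(inventory) == ["vol-0bb22", "vol-0cc33"]
```

Detection is the fallback, though: the preventive controls mentioned above (default EBS encryption, Service Control Policies, Azure Policy) should make this list empty by construction.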

#6 No Logging or Monitoring

You cannot detect what you cannot see. Yet we frequently encounter cloud environments where audit logging is partially or entirely disabled. AWS CloudTrail disabled or limited to management events only. Azure Activity Log with default 90-day retention and no export to long-term storage. GCP Cloud Audit Logs not configured for data access logging. No centralized log aggregation. No alerts. No one watching.

Without comprehensive logging, an attacker who gains access to your environment can operate undetected for weeks or months. When a breach is eventually discovered, the lack of logs makes forensic investigation nearly impossible — you cannot determine what was accessed, what was exfiltrated, or how the attacker got in. This is a compliance failure under virtually every regulatory framework, including GDPR and NIS2.

How to fix it:

  • Enable all audit logging across every account and region. In AWS, enable CloudTrail with data events for S3 and Lambda, enable VPC Flow Logs, and enable GuardDuty. In Azure, configure Diagnostic Settings to export Activity Logs and enable Microsoft Defender for Cloud. In GCP, enable Data Access audit logs for all services.
  • Centralize logs in a dedicated security account or project that is isolated from production workloads. Use immutable storage (S3 Object Lock, Azure immutable blob storage) to prevent attackers from deleting logs.
  • Set up alerts for critical events: root account usage, IAM policy changes, security group modifications, failed authentication attempts, resource creation in unexpected regions, and any changes to logging configuration itself.
  • Retain logs for a minimum of one year, or longer as required by your regulatory obligations.
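The alerting logic for the critical events above is straightforward once logs are flowing. A minimal sketch, assuming input shaped like CloudTrail's Records entries (`eventName`, `userIdentity.type`); the event list and function are our illustration, not an exhaustive detection ruleset:

```python
# Events that should page someone: tampering with logging,
# bucket policies, or network exposure.
CRITICAL_EVENTS = {
    "StopLogging",
    "DeleteTrail",
    "PutBucketPolicy",
    "AuthorizeSecurityGroupIngress",
}

def critical_findings(events: list[dict]) -> list[dict]:
    """Return CloudTrail records worth alerting on: any root account
    activity, plus any event in the critical-event set."""
    findings = []
    for event in events:
        is_root = event.get("userIdentity", {}).get("type") == "Root"
        if is_root or event.get("eventName") in CRITICAL_EVENTS:
            findings.append(event)
    return findings
```

In practice this logic lives in EventBridge rules, GuardDuty, or a SIEM rather than hand-rolled code, but the rule shape is the same: match on event name and identity, alert immediately.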

#7 Hardcoded Secrets in Code and Configuration

We routinely find API keys, database passwords, service account credentials, and encryption keys embedded directly in application code and configuration files, baked into container images as environment variables, or — worst of all — committed to version control. This is one of the most dangerous practices we encounter, and it is disturbingly common.

Once a secret is committed to a Git repository, it lives in the repository's history permanently, even if the offending commit is later amended or reverted. Public GitHub repositories are continuously scanned by automated tools — both by security researchers and by attackers. AWS reports that exposed access keys on GitHub are typically exploited within minutes. Even private repositories are at risk if access controls are not properly managed or if a developer's account is compromised.

How to fix it:

  • Use secrets management services: AWS Secrets Manager or AWS Systems Manager Parameter Store, Azure Key Vault, or GCP Secret Manager. These services provide secure storage, automatic rotation, fine-grained access control, and audit logging for all secrets.
  • Implement pre-commit hooks that scan for secrets before code is committed. Tools like git-secrets, truffleHog, and detect-secrets can catch most common patterns (API keys, passwords, private keys) before they enter version control.
  • Use IAM roles and workload identity instead of long-lived credentials wherever possible. EC2 instance roles, ECS task roles, Azure Managed Identities, and GCP Workload Identity Federation eliminate the need to manage secrets for service-to-service authentication entirely.
  • Rotate all secrets immediately if you discover they have been exposed, and audit access logs to determine whether they were used by an unauthorized party.
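Tools like git-secrets and truffleHog are essentially pattern matchers at heart. A simplified sketch of the idea (the patterns below cover only a few common formats and the helper is our illustration, not a replacement for a real scanner):

```python
import re

# A few common credential formats; real scanners ship hundreds of rules
# plus entropy-based detection for random-looking strings.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "assigned_password": re.compile(r"(?i)password\s*[=:]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_for_secrets(text: str) -> list[tuple[str, int]]:
    """Return (pattern name, line number) for each suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

Wired into a pre-commit hook, a scan like this blocks the commit before the secret ever reaches the repository history — far cheaper than rotating an exposed key after the fact.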

#8 Neglecting Container Security

Containers have become the default deployment model for modern applications, but security practices have not kept pace with adoption. We commonly find containers running as root, built from unpatched or unofficial base images, deployed without resource limits, and operating without any runtime security monitoring.

A container running as root inside a pod or task has significantly greater potential for damage if compromised. Container escape vulnerabilities, while less common than they once were, still appear regularly — and a root container on a shared host can potentially access other tenants' data. Outdated base images carry known vulnerabilities that attackers can exploit. Without image scanning, you are deploying code with known security holes into production.

How to fix it:

  • Run containers as non-root by default. Set runAsNonRoot: true in Kubernetes security contexts. Use the USER directive in Dockerfiles. Drop all Linux capabilities and add back only what is required.
  • Scan container images for vulnerabilities as part of your CI/CD pipeline. AWS ECR image scanning, Azure Defender for container registries, and GCP Artifact Analysis provide native scanning. Third-party tools like Trivy, Grype, and Snyk Container offer additional coverage.
  • Use minimal base images (Alpine, distroless, scratch) to reduce the attack surface. Every package in a container image is a potential vulnerability.
  • Implement admission controllers (OPA Gatekeeper, Kyverno) in Kubernetes to enforce security policies and prevent the deployment of non-compliant images.
  • Enable runtime security monitoring to detect anomalous behavior inside running containers — unexpected processes, network connections, or file system modifications.
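The root-user check can be enforced before an image is ever built. A rough sketch of a Dockerfile lint (our own approximation: it treats each stage as root unless a USER directive appears, even though some base images set their own non-root user):

```python
def dockerfile_runs_as_root(dockerfile: str) -> bool:
    """True if a Dockerfile's final stage never switches to a non-root user."""
    effective_user = "root"  # Docker's default when no USER directive is given
    for raw in dockerfile.splitlines():
        line = raw.strip()
        if line.upper().startswith("USER "):
            effective_user = line.split(None, 1)[1]
        elif line.upper().startswith("FROM "):
            # Approximation: assume each new build stage starts as root.
            effective_user = "root"
    return effective_user in ("root", "0")
```

This kind of static check catches the easy mistakes in CI; the Kubernetes-side `runAsNonRoot: true` setting then enforces the same rule at admission time as defense in depth.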

#9 Missing Network Segmentation

Flat networks — where every resource can communicate with every other resource — are one of the most dangerous architectural patterns in cloud environments. We frequently find organizations that have migrated to the cloud but replicated the flat network architecture of their on-premises data center, placing all workloads in a single VPC or virtual network with no internal segmentation.

In a flat network, an attacker who compromises a single resource — a web server, a developer workstation, a CI/CD runner — can immediately begin lateral movement to reach databases, internal APIs, management systems, and other high-value targets. Network segmentation limits the blast radius of a compromise by restricting which resources can communicate with each other.

How to fix it:

  • Separate workloads into distinct VPCs or virtual networks based on environment (production, staging, development) and sensitivity level. Use VPC peering or transit gateways for controlled inter-VPC communication.
  • Use private subnets for databases, application servers, and internal services. Only web-facing load balancers and reverse proxies should reside in public subnets.
  • Implement microsegmentation using security groups and network ACLs to restrict traffic between individual resources. A web server should be able to reach its database but not the database of an unrelated application.
  • Use PrivateLink (AWS), Private Endpoints (Azure), or Private Service Connect (GCP) to access cloud services without traversing the public internet.
  • Deploy network monitoring and anomaly detection to identify unexpected traffic patterns that could indicate lateral movement.
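One quick segmentation sanity check: a "private" subnet whose route table sends default traffic to an internet gateway is not private at all. A sketch assuming input shaped like EC2's DescribeRouteTables entries (Routes, Associations); the function is our illustration:

```python
def subnets_with_internet_route(route_tables: list[dict]) -> set[str]:
    """Return subnet IDs whose route table sends 0.0.0.0/0 to an
    internet gateway (gateway IDs prefixed "igw-"). NAT gateway
    routes do not count: they allow outbound only."""
    exposed = set()
    for table in route_tables:
        has_igw_default = any(
            route.get("DestinationCidrBlock") == "0.0.0.0/0"
            and route.get("GatewayId", "").startswith("igw-")
            for route in table.get("Routes", [])
        )
        if has_igw_default:
            for assoc in table.get("Associations", []):
                if "SubnetId" in assoc:
                    exposed.add(assoc["SubnetId"])
    return exposed
```

Comparing this set against the subnets where your databases actually live is a fast way to verify the "private subnets only" rule is more than a diagram.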

#10 No Incident Response Plan for Cloud

Many organizations have incident response plans, but those plans were written for on-premises environments and do not account for the unique characteristics of cloud infrastructure. When a cloud-specific incident occurs — compromised access keys, a crypto-mining attack, data exfiltration through a misconfigured API — the response team often does not know where to look, what tools to use, or how to contain the threat effectively in a cloud context.

Cloud incidents move fast. An attacker with stolen AWS credentials can programmatically create resources, exfiltrate data, and cover their tracks across multiple regions in minutes. Without a rehearsed, cloud-specific response plan, organizations waste critical time trying to figure out what happened rather than containing the damage.

How to fix it:

  • Develop cloud-specific incident response playbooks that address common cloud attack scenarios: compromised credentials, unauthorized resource creation, data exfiltration, ransomware affecting cloud workloads, and supply chain attacks through compromised dependencies.
  • Define roles and responsibilities for cloud incident response. Ensure your team knows who has the authority and the access to disable compromised accounts, isolate affected resources, and preserve forensic evidence in cloud environments.
  • Practice regularly with tabletop exercises and simulated incidents. Test your team's ability to detect, investigate, and respond to cloud-specific scenarios under time pressure.
  • Prepare tooling in advance: ensure you have the scripts, runbooks, and access necessary to perform common response actions (revoking access keys, isolating instances, capturing memory dumps, preserving log data) before an incident occurs.
  • Establish relationships with your cloud provider's security response teams (AWS Support, Microsoft Security Response Center, Google Cloud security team) and know how to engage them during an active incident.
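Preparing tooling in advance can be as simple as a containment script that is written, reviewed, and rehearsed before it is needed. A sketch of a first-response step for a compromised IAM user: deactivate (rather than delete) every access key, preserving the key IDs for the investigation. The function takes the client as a parameter so the playbook can be rehearsed against a stub instead of a live account; the method names follow boto3's IAM client:

```python
def contain_compromised_user(iam_client, user_name: str) -> list[str]:
    """Deactivate all access keys for a compromised IAM user.

    Keys are set Inactive, not deleted, so forensic evidence
    (key IDs, last-used data) survives for the investigation.
    `iam_client` needs boto3-style list_access_keys and
    update_access_key methods.
    """
    actions = []
    response = iam_client.list_access_keys(UserName=user_name)
    for key in response["AccessKeyMetadata"]:
        key_id = key["AccessKeyId"]
        iam_client.update_access_key(
            UserName=user_name, AccessKeyId=key_id, Status="Inactive"
        )
        actions.append(f"deactivated {key_id}")
    return actions
```

A real playbook would also invalidate active sessions and attach an explicit deny policy, but the principle is the same: the response actions are scripted and tested before the incident, not improvised during it.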

Cloud Security Checklist

Use this quick-reference checklist to evaluate your cloud security posture. Each item maps to one or more of the mistakes discussed above.

Cloud Security Quick-Reference Checklist

  • All IAM policies follow the principle of least privilege — no wildcard permissions
  • Regular access reviews are conducted quarterly (at minimum)
  • Block Public Access is enabled at the account/organization level for all storage services
  • MFA is enforced on all accounts, with hardware keys on root/admin accounts
  • Root/super-admin accounts are locked down and not used for daily operations
  • No management ports (SSH, RDP) are open to 0.0.0.0/0
  • All access to instances is through bastion hosts, VPN, or cloud-native solutions
  • Encryption at rest is enabled by default for all storage and database services
  • TLS is enforced for all communications, including internal service-to-service traffic
  • CloudTrail / Activity Log / Cloud Audit Logs are enabled in all regions and accounts
  • Logs are centralized, immutable, and retained for at least one year
  • Alerts are configured for critical security events
  • No secrets are hardcoded in source code, configuration files, or container images
  • A secrets management service is in use for all credentials and API keys
  • Pre-commit hooks scan for accidentally committed secrets
  • Container images are scanned for vulnerabilities before deployment
  • Containers run as non-root with minimal capabilities
  • Network segmentation separates environments and workloads
  • Databases and internal services reside in private subnets only
  • A cloud-specific incident response plan exists and has been tested

How A.KHAT Helps

At A.KHAT, we specialize in identifying and remediating the cloud security gaps that put organizations at risk. Our team conducts thorough cloud security assessments across AWS, Azure, and GCP, combining automated tooling with manual expert review to uncover misconfigurations that scanners alone will miss.

Our cloud security services include:

  • Cloud Security Assessment — A comprehensive review of your cloud environment's configuration, IAM policies, network architecture, encryption posture, logging setup, and incident readiness. We deliver a prioritized report with actionable remediation guidance.
  • Cloud Penetration Testing — Authorized security testing that simulates real-world attacks against your cloud infrastructure, applications, and APIs to identify exploitable vulnerabilities.
  • IAM and Access Review — Detailed analysis of your identity and access management configuration to identify over-provisioned permissions, unused accounts, and policy violations.
  • Cloud Architecture Review — Evaluation of your cloud architecture against security best practices and industry frameworks (CIS Benchmarks, AWS Well-Architected, Azure Security Benchmark) with recommendations for hardening.
  • Incident Response Planning — Development and testing of cloud-specific incident response playbooks tailored to your environment and team.

Secure Your Cloud Infrastructure

Contact us for a cloud security assessment. We will identify your exposures and provide a clear remediation roadmap.

Request an Assessment

Conclusion

Cloud security is not fundamentally different from traditional security — the same principles of least privilege, defense in depth, encryption, monitoring, and incident preparedness still apply. What is different is the speed at which mistakes can be made and exploited. A single Terraform apply, a careless IAM policy change, or a storage bucket left public for "just a few minutes" can expose your entire organization.

The good news is that every mistake in this article is fixable. Most can be addressed with configuration changes and policy enforcement rather than expensive new tools. The cloud providers themselves offer robust security features — the challenge is ensuring they are consistently enabled, properly configured, and regularly audited.

Start with the checklist above. Identify your gaps. Prioritize based on risk. And if you need expert help, reach out. Cloud security is not something you get right once and forget — it is an ongoing practice that requires continuous attention, regular testing, and a willingness to learn from both your own findings and the mistakes of others.

The most expensive cloud security measure is the one you implement after a breach. The most effective is the one you implement before.