Terraform
The Google Cloud audit logs service records administrative activities and accesses to the Google Cloud resources of the project. It is important to enable audit logs to be able to investigate malicious activities in the event of a security incident.
Some project members may be exempted from having their activities recorded in the Google Cloud audit log service, creating a blind spot and reducing the capacity to investigate future security events.
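As a sketch, a Terraform `google_project_iam_audit_config` resource (the project ID is illustrative) can enable audit logs without exempting any members:

```hcl
# Enable audit logging for all services of the project, without
# exempting any member from having their activity recorded.
resource "google_project_iam_audit_config" "audit" {
  project = "my-project" # illustrative project ID
  service = "allServices"

  audit_log_config {
    log_type = "ADMIN_READ"
    # Avoid "exempted_members": exempted principals create a blind spot.
  }
  audit_log_config {
    log_type = "DATA_WRITE"
  }
}
```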
Azure Active Directory offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned, as they grant sensitive permissions like the ability to reset passwords for all users.
An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.
This rule raises an issue when one of the following roles is assigned:
Application Administrator
Authentication Administrator
Cloud Application Administrator
Global Administrator
Groups Administrator
Helpdesk Administrator
Password Administrator
Privileged Authentication Administrator
Privileged Role Administrator
User Administrator
GCP SQL instances offer encryption in transit, with support for TLS, but insecure connections are still accepted. On an unsecured network, such as a public network, the risk of traffic being intercepted is high. When the data isn’t encrypted, an attacker can intercept it and read confidential information.
When creating a GCP SQL instance, a public IP address is automatically assigned to it and connections to the SQL instance from public networks can be authorized.
TLS is automatically used when connecting to SQL instances through:
The Cloud SQL Auth proxy.
The Java Socket Library.
The built-in mechanisms in the App Engine environments.
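For other clients, TLS can be enforced on the instance itself. A minimal Terraform sketch (instance name and tier are illustrative; newer provider versions may expose this as `ssl_mode` instead):

```hcl
# Require TLS for all connections to the Cloud SQL instance.
resource "google_sql_database_instance" "db" {
  name             = "example-instance" # illustrative
  database_version = "POSTGRES_15"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl = true # reject unencrypted connections
    }
  }
}
```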
Granting highly privileged resource rights to users or groups can reduce an organization’s ability to protect against account or service theft. It prevents proper segregation of duties and creates potentially critical attack vectors on affected resources.
If elevated access rights are abused or compromised, both the data that the affected resources work with and their access tracking are at risk.
Excessive granting of GCP IAM permissions can allow attackers to exploit an organization’s cloud resources with malicious intent.
To prevent improper creation or deletion of resources after an account is compromised, proactive measures include both following GCP Security Insights and ensuring custom roles contain as few privileges as possible.
After gaining a foothold in the target infrastructure, sophisticated attacks typically consist of two major parts. First, attackers must deploy new resources to carry out their malicious intent. To guard against this, operations teams must control what unexpectedly appears in the infrastructure, such as what is:
added
written
updated
started
appended
applied
accessed.
Once the malicious intent is executed, attackers must avoid detection at all costs. To counter attackers’ attempts to remove their fingerprints, operations teams must control what unexpectedly disappears from the infrastructure, such as what is:
stopped
disabled
canceled
deleted
destroyed
detached
disconnected
suspended
rejected
removed.
For operations teams to be resilient in this scenario, their organization must apply both:
Detection security: log these actions to better detect malicious actions.
Preventive security: review and limit granted permissions.
This rule raises an issue when a custom role grants a number of sensitive permissions (read-write or destructive permissions) that is greater than a given parameter.
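As a compliant sketch, a custom role can be restricted to read-only permissions (role and permission names are illustrative):

```hcl
# A narrowly scoped custom role: read-only permissions only,
# no read-write or destructive permissions.
resource "google_project_iam_custom_role" "viewer" {
  role_id = "customObjectViewer" # illustrative
  title   = "Custom Object Viewer"
  permissions = [
    "storage.objects.get",
    "storage.objects.list",
  ]
}
```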
There are three managed profiles to choose from: `COMPATIBLE` (default), `MODERN`, and `RESTRICTED`:
The `RESTRICTED` profile supports a reduced set of cryptographic algorithms, intended to meet stricter compliance requirements.
The `MODERN` profile supports a wider set of cryptographic algorithms, allowing most modern clients to negotiate TLS.
The `COMPATIBLE` profile supports the widest set of cryptographic algorithms, allowing connections from older client applications.
The `MODERN` and `COMPATIBLE` profiles allow the use of older cryptographic algorithms that are no longer considered secure and are susceptible to attack.
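A minimal Terraform sketch of the stricter option (the policy name is illustrative):

```hcl
# Use the RESTRICTED profile so that only strong cipher suites
# and TLS versions can be negotiated.
resource "google_compute_ssl_policy" "strict" {
  name            = "restricted-ssl-policy" # illustrative
  profile         = "RESTRICTED"
  min_tls_version = "TLS_1_2"
}
```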
Enabling legacy authorization, Attribute-Based Access Control (ABAC), on Google Kubernetes Engine resources can reduce an organization’s ability to protect itself against access controls being compromised.
For Kubernetes, Attribute-Based Access Control has been superseded by Role-Based Access Control. ABAC is not under active development anymore and thus should be avoided.
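In Terraform, a sketch of a compliant cluster keeps legacy ABAC disabled (cluster name and location are illustrative):

```hcl
# Keep legacy ABAC disabled so RBAC governs access to the cluster.
resource "google_container_cluster" "cluster" {
  name               = "example-cluster" # illustrative
  location           = "us-central1"
  initial_node_count = 1

  enable_legacy_abac = false # use RBAC, not the deprecated ABAC
}
```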
When S3 versioning is enabled, it is possible to require an additional authentication factor before being allowed to delete versions of an object or change the versioning state of a bucket. This prevents accidental object deletion by forcing the user sending the delete request to prove that they have a valid MFA device and a corresponding valid token.
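A hedged Terraform sketch (the bucket name and MFA device ARN are illustrative; note that MFA delete can only be changed by a user holding a valid MFA device):

```hcl
# Enable versioning together with MFA delete on the bucket.
resource "aws_s3_bucket" "bucket" {
  bucket = "example-bucket" # illustrative
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.bucket.id
  # "mfa" is the device serial number, a space, and the current token.
  mfa = "arn:aws:iam::123456789012:mfa/example-device 123456" # illustrative

  versioning_configuration {
    status     = "Enabled"
    mfa_delete = "Enabled"
  }
}
```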
Domain Name Systems (DNS) are vulnerable by default to various types of attacks.
One of the biggest risks is DNS cache poisoning, which occurs when a DNS accepts spoofed DNS data, caches the malicious records, and potentially sends them later in response to legitimate DNS request lookups. This attack typically relies on the attacker’s MITM ability on the network and can be used to redirect users from an intended website to a malicious website.
To prevent these vulnerabilities, Domain Name System Security Extensions (DNSSEC) ensure the integrity and authenticity of DNS data by digitally signing DNS zones.
The public key of a DNS zone used to validate signatures can be trusted as DNSSEC is based on the following chain of trust:
The parent DNS zone adds a “fingerprint” of the public key of the child zone in a “DS record”.
The parent DNS zone signs it with its own private key.
And this process continues until the root zone.
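In Terraform, DNSSEC signing can be turned on when the zone is defined (zone and domain names are illustrative):

```hcl
# Sign the zone with DNSSEC so resolvers can validate its records.
resource "google_dns_managed_zone" "zone" {
  name     = "example-zone" # illustrative
  dns_name = "example.com."

  dnssec_config {
    state = "on"
  }
}
```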
Roles that allow privilege escalation can allow attackers to maliciously exploit an organization’s cloud resources.
Certain GCP permissions allow impersonation of one or more privileged principals within a GCP infrastructure. To prevent privilege escalation after an account has been compromised, proactively follow GCP Security Insights and ensure that custom roles contain as few privileges as possible that allow direct or indirect impersonation.
For example, privileges like deploymentmanager.deployments.create allow impersonation of service accounts, even if the name does not suggest it. Other privileges, like setIamPolicy, are more explicit and directly allow their holder to extend their privileges.
After gaining a foothold in the target infrastructure, sophisticated attackers typically map their newfound roles to understand what is exploitable.
The riskiest privileges are either:
At the infrastructure level: privileges to perform project, folder, or organization-wide administrative tasks.
At the resource level: privileges to perform resource-wide administrative tasks.
In either case, the following privileges should be avoided or granted only with caution:
..setIamPolicy
cloudbuild.builds.create
cloudfunctions.functions.create
cloudfunctions.functions.update
cloudscheduler.jobs.create
composer.environments.create
compute.instances.create
dataflow.jobs.create
dataproc.clusters.create
deploymentmanager.deployments.create
iam.roles.update
iam.serviceAccountKeys.create
iam.serviceAccounts.actAs
iam.serviceAccounts.getAccessToken
iam.serviceAccounts.getOpenIdToken
iam.serviceAccounts.implicitDelegation
iam.serviceAccounts.signBlob
iam.serviceAccounts.signJwt
orgpolicy.policy.set
run.services.create
serviceusage.apiKeys.create
serviceusage.apiKeys.list
storage.hmacKeys.create
The likelihood of security incidents increases when cryptographic keys are used for a long time. Thus, to strengthen data protection, it is recommended to rotate symmetric keys created with the Google Cloud Key Management Service (KMS) automatically and periodically. Note that it is not possible in GCP KMS to rotate asymmetric keys automatically.
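A minimal Terraform sketch of automatic rotation (key and ring names are illustrative):

```hcl
resource "google_kms_key_ring" "ring" {
  name     = "example-ring" # illustrative
  location = "global"
}

# Rotate the symmetric key automatically every 90 days.
resource "google_kms_crypto_key" "key" {
  name            = "example-key" # illustrative
  key_ring        = google_kms_key_ring.ring.id
  rotation_period = "7776000s" # 90 days, expressed in seconds
}
```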
When object versioning for Google Cloud Storage (GCS) buckets is enabled, different versions of an object are stored in the bucket, preventing accidental deletion. A specific version can always be deleted when the generation number of an object version is specified in the request.
Object versioning cannot be enabled on a bucket with a retention policy. A retention policy ensures that an object is retained for a specific period of time even if a request is made to delete or replace it. Thus, a retention policy locks the single current version of an object in the bucket, which differs from object versioning where different versions of an object are retained.
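A minimal Terraform sketch of a versioned GCS bucket (the bucket name is illustrative):

```hcl
# Keep previous object versions to guard against accidental deletion.
resource "google_storage_bucket" "bucket" {
  name     = "example-bucket" # illustrative
  location = "US"

  versioning {
    enabled = true
  }
}
```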
Granting public access to GCP resources may reduce an organization’s ability to protect itself against attacks or theft of its GCP resources. Security incidents associated with misuse of public access include disruption of critical functions, data theft, and additional costs due to resource overload.
To be as prepared as possible in the event of a security incident, authentication combined with fine-grained permissions helps maintain the principle of defense in depth and trace incidents back to the perpetrators.
GCP also provides the ability to grant access to a large group of people:
If public access is granted to all Google users, the impact of a data theft is the same as if public access is granted to all Internet users.
If access is granted to a large Google group, the impact of a data theft is limited based on the size of the group.
The only thing that changes in these cases is the ability to track user access in the event of an incident.
SSH keys managed in a project’s metadata can be used to access GCP VM instances. By default, GCP automatically deploys project-level SSH keys to VM instances.
Project-level SSH keys can lead to unauthorized access because:
Their use prevents fine-grained VM-level access control and makes it difficult to follow the principle of least privilege.
Unlike managed access control with OS Login, manual cryptographic key management is error-prone and requires careful attention. For example, if a user leaves a project, their SSH keys should be removed from the metadata to prevent unwanted access.
If a project-level SSH key is compromised, all VM instances may be compromised.
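Project-level keys can be blocked per instance through metadata. A hedged Terraform sketch (instance name, zone, and image are illustrative):

```hcl
# Block project-wide SSH keys on this instance; only instance-level
# keys (or OS Login) can then be used to connect.
resource "google_compute_instance" "vm" {
  name         = "example-vm" # illustrative
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12" # illustrative image
    }
  }

  network_interface {
    network = "default"
  }

  metadata = {
    "block-project-ssh-keys" = "true"
  }
}
```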
GCP App Engine supports encryption in transit through TLS. As soon as the app is deployed, it can be requested using appspot.com domains or custom domains. By default, endpoints accept both clear-text and encrypted traffic. When communication isn’t encrypted, there is a risk that an attacker could intercept it and read confidential information.
When creating an App Engine, request handlers can be set with different security levels for encryption:
SECURE_NEVER: only HTTP requests are allowed (HTTPS requests are redirected to HTTP).
SECURE_OPTIONAL and SECURE_DEFAULT: both HTTP and HTTPS requests are allowed.
SECURE_ALWAYS: only HTTPS requests are allowed (HTTP requests are redirected to HTTPS).
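A hedged fragment of a Terraform App Engine version (runtime, service, and deployment arguments are omitted for brevity):

```hcl
# Fragment: force HTTPS on every request handler.
resource "google_app_engine_standard_app_version" "app" {
  # ... runtime, service, and deployment arguments omitted ...

  handlers {
    url_regex      = ".*"
    security_level = "SECURE_ALWAYS" # HTTP requests are redirected to HTTPS

    script {
      script_path = "auto"
    }
  }
}
```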
Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.
Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.
Using authentication coupled with fine-grained authorizations helps bring defense in depth and traceability to investigators of security incidents.
Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.
Resource-based policies granting access to all users can lead to information leakage.
By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.
As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.
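HTTP access can be blocked with a bucket policy that denies requests made without TLS. A sketch (the bucket name is illustrative):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "example-bucket" # illustrative
}

# Deny any S3 request that does not use TLS.
resource "aws_s3_bucket_policy" "https_only" {
  bucket = aws_s3_bucket.bucket.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.bucket.arn,
        "${aws_s3_bucket.bucket.arn}/*",
      ]
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}
```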
A well-structured tagging strategy is essential when working with AWS resources. Inadequate tagging practices can lead to several potential issues and make it challenging to manage your AWS environment effectively:
When resources lack proper tags or have inconsistent tagging, it becomes difficult to identify their purpose, owner, or role within the infrastructure. This ambiguity can lead to confusion and errors during resource management and allocation. Without clear and consistent tags, teams may struggle to understand the resource’s function, hindering collaboration and efficiency.
Effective tagging is crucial for monitoring and managing costs in AWS. Resources without appropriate tags may not be adequately categorized, making tracking their usage and associated expenses hard. As a result, it becomes challenging to allocate costs to specific projects, departments, or teams accurately. Poor cost visibility can lead to overspending, budgeting issues, and difficulty in optimizing resource allocation.
Tags play a significant role in resource security and compliance. Inadequate tagging can result in incorrectly classified resources, leading to potential security vulnerabilities and compliance risks. It becomes challenging to apply consistent security policies, control access, and track changes without proper tagging. This can leave the AWS environment more susceptible to unauthorized access and compliance violations.
Automation and governance rely on well-defined tags to enforce policies and ensure consistent resource management. Inadequate tagging practices can hinder automation efforts, making it challenging to automate resource provisioning, scaling, and deprovisioning. Additionally, insufficient tags can lead to governance challenges, making it harder to enforce standardized policies and configurations across resources.
Tags enable efficient resource search and filtering in the AWS Management Console and API. When tags are missing, inconsistent, or irrelevant, locating specific resources becomes cumbersome. Teams may need to resort to manual searches or resort to resource-naming conventions, defeating the purpose of tags. The lack of well-organized tags can increase the time and effort required for resource discovery and impact productivity.
Inadequate tagging practices can also impede resource lifecycle management. It becomes harder to track when resources were created, their purpose, and whether they are still in use. Without this vital information, it becomes challenging to identify and delete unused or deprecated resources, leading to resource sprawl and increased costs.
In summary, an inadequate tagging strategy in AWS resources can lead to difficulties in resource identification, cost management, security, automation, and resource lifecycle management. It is crucial to establish a well-organized tagging approach to mitigate these potential issues and efficiently manage your AWS environment. In the following section, we will explore how to fix this code smell by adopting best practices for tagging AWS resources.
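As a sketch of such a strategy, a resource can carry a consistent tag set covering identity, ownership, and cost (all tag keys and values are illustrative):

```hcl
# A consistent tag set makes ownership, cost, and lifecycle visible.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t3.micro"

  tags = {
    Name        = "app-server"
    Environment = "production"
    Owner       = "platform-team"
    CostCenter  = "CC-1234"
  }
}
```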
Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.
Depending on the component, inbound access from the Internet can be enabled via:
a boolean value that explicitly allows access to the public network.
the assignment of a public IP address.
database firewall rules that allow public IP ranges.
Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.
This decision increases the likelihood of attacks on the organization, such as:
data breaches.
intrusions into the infrastructure to permanently steal from it.
and various malicious traffic, such as DDoS attacks.
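As a hedged fragment, public network access on an Azure database server can be disabled explicitly (other required arguments are omitted):

```hcl
# Fragment: keep the database server off the public network.
resource "azurerm_postgresql_server" "db" {
  # ... name, location, sku_name, and credentials omitted ...

  public_network_access_enabled = false # no inbound access from the Internet
  ssl_enforcement_enabled       = true
}
```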
Using unencrypted RDS DB resources exposes data to unauthorized access. This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.
This situation can occur in a variety of scenarios, such as:
A malicious insider working at the cloud provider gains physical access to the storage device.
Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.
After a successful intrusion, the underlying applications are exposed to:
theft of intellectual property and/or personal data
extortion
denial of services and security bypasses via data corruption or deletion
AWS-managed encryption at rest reduces this risk with a simple switch.
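A minimal Terraform sketch of that switch (identifier and credentials are illustrative):

```hcl
# Encrypt the database storage at rest; this covers data, logs,
# automatic backups, read replicas, and snapshots.
resource "aws_db_instance" "db" {
  identifier        = "example-db" # illustrative
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "dbadmin"
  password          = var.db_password # illustrative variable

  storage_encrypted = true
}
```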
Disabling logging of this component can lead to missing traceability in case of a security incident.
Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.
Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.
Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. If adversaries gain physical access to the storage media, they are not able to read encrypted data.
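A hedged Terraform sketch (the notebook name is illustrative; the IAM role and KMS key are assumed to be defined elsewhere):

```hcl
# Encrypt the notebook instance's attached storage with a KMS key.
resource "aws_sagemaker_notebook_instance" "notebook" {
  name          = "example-notebook" # illustrative
  role_arn      = aws_iam_role.sagemaker.arn # assumed defined elsewhere
  instance_type = "ml.t3.medium"
  kms_key_id    = aws_kms_key.sagemaker.arn # assumed defined elsewhere
}
```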
Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.
An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.
This rule raises an issue when one of the following roles is assigned:
Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)
A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.
S3 buckets can be in three states related to versioning:
unversioned (default one)
enabled
suspended
When an S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the S3 bucket.
This can lead to unintentional or intentional information loss.
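A minimal Terraform sketch of enabling versioning (the bucket name is illustrative):

```hcl
# Enable versioning so overwritten objects keep previous versions.
resource "aws_s3_bucket" "bucket" {
  bucket = "example-bucket" # illustrative
}

resource "aws_s3_bucket_versioning" "versioning" {
  bucket = aws_s3_bucket.bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```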
Clear-text protocols such as `ftp`, `telnet`, or `http` lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:
sensitive data exposure
traffic redirected to a malicious endpoint
malware-infected software update or installer
execution of client-side code
corruption of critical information
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.
For example, attackers could successfully compromise prior security layers by:
bypassing isolation mechanisms
compromising a component of the network
getting the credentials of an internal IAM account (either from a service account or an actual person)
In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.
Note that use of the `http` protocol is being deprecated by major web browsers.
In the past, it has led to several published vulnerabilities.
Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disposal of disks, and other attacks on the AWS infrastructure itself.
There are three SSE options:
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
AWS manages encryption keys and the encryption itself (with AES-256) on its own.
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
Server-Side Encryption with Customer-Provided Keys (SSE-C)
AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.
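As a sketch, default SSE-KMS encryption can be configured on a bucket like this (the bucket resource and KMS key are assumed to be defined elsewhere):

```hcl
# Default server-side encryption with a KMS-managed key (SSE-KMS).
resource "aws_s3_bucket_server_side_encryption_configuration" "sse" {
  bucket = aws_s3_bucket.bucket.id # assumed defined elsewhere

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn # assumed defined elsewhere
    }
  }
}
```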
Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.
Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default.
Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise leak a message they are not able to access the data.
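Encryption at rest for these three services can be sketched in Terraform as follows (names, zone, and key alias are illustrative):

```hcl
# SNS: encrypt messages at rest with a KMS key.
resource "aws_sns_topic" "topic" {
  name              = "example-topic" # illustrative
  kms_master_key_id = "alias/aws/sns" # AWS-managed key alias
}

# EBS: encrypt the volume; snapshots created from it inherit encryption.
resource "aws_ebs_volume" "volume" {
  availability_zone = "us-east-1a" # illustrative
  size              = 20
  encrypted         = true
}

# EFS: encrypt stored files at rest.
resource "aws_efs_file_system" "fs" {
  encrypted = true
}
```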
Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.
Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.
In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.
Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to access unencrypted information.
Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.
Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API. This means attacks both on the functionality provided by the API and its infrastructure.
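As a sketch, an API Gateway method can require IAM authentication instead of being open (the REST API and resource references are assumed to be defined elsewhere):

```hcl
# Require IAM authentication on the API method instead of "NONE".
resource "aws_api_gateway_method" "get_items" {
  rest_api_id   = aws_api_gateway_rest_api.api.id   # assumed defined elsewhere
  resource_id   = aws_api_gateway_resource.items.id # assumed defined elsewhere
  http_method   = "GET"
  authorization = "AWS_IAM" # not "NONE"
}
```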
Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.
Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.
Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.
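As a hedged fragment, the retention duration of automated RDS backups can be set explicitly (other required arguments are omitted):

```hcl
# Fragment: keep automated backups for 30 days instead of the minimum.
resource "aws_db_instance" "db" {
  # ... engine, instance class, and credentials omitted ...

  backup_retention_period = 30 # days of automated backups to keep
}
```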
Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.
The following canned ACLs are security-sensitive:
`PublicRead` and `PublicReadWrite` grant “read” and “read and write” privileges, respectively, to everyone in the world (the `AllUsers` group).
`AuthenticatedRead` grants the “read” privilege to all authenticated users (the `AuthenticatedUsers` group).
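A compliant sketch uses the `private` canned ACL instead (the bucket name is illustrative):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "example-bucket" # illustrative
}

# Use the "private" canned ACL instead of PublicRead or AuthenticatedRead.
resource "aws_s3_bucket_acl" "acl" {
  bucket = aws_s3_bucket.bucket.id
  acl    = "private" # only the bucket owner has access
}
```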
Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks.
Authenticating via managed identities to an Azure resource relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets used by Azure are not even accessible to end-users.
In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions.
By transparently taking care of the Azure Active Directory authentication, Managed Identities allow getting rid of day-to-day credentials management.
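As a hedged fragment, a managed identity is enabled through the `identity` block on the resource (other required arguments are omitted):

```hcl
# Fragment: give the VM a system-assigned managed identity so it can
# authenticate to other Azure resources without stored credentials.
resource "azurerm_linux_virtual_machine" "vm" {
  # ... name, size, image, and network arguments omitted ...

  identity {
    type = "SystemAssigned"
  }
}
```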
Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.
Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.
Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.
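A minimal Terraform sketch of a longer retention window (the log group name is illustrative):

```hcl
# Retain logs for one year to support later forensic analysis.
resource "aws_cloudwatch_log_group" "app" {
  name              = "/app/example" # illustrative
  retention_in_days = 365
}
```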
Developers often use TODO tags to mark areas in the code where additional work or improvements are needed but are not implemented immediately. However, these TODO tags sometimes get overlooked or forgotten, leading to incomplete or unfinished code. This rule aims to identify and address unattended TODO tags to ensure a clean and maintainable codebase. This description explores why this is a problem and how it can be fixed to improve the overall code quality.
A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information may occur.
Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.
To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.
Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down access.
Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.
By default, S3 buckets are private: only the bucket owner can access them.
This access control can be relaxed with ACLs or policies.
To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:
`BlockPublicAcls`: whether to block public ACLs from being set on the S3 bucket.
`IgnorePublicAcls`: whether to ignore existing public ACLs set on the S3 bucket.
`BlockPublicPolicy`: whether to block public policies from being set on the S3 bucket.
`RestrictPublicBuckets`: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.
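These four settings map directly to a Terraform resource (the bucket resource is assumed to be defined elsewhere):

```hcl
# Enable all four S3 public-access guards on the bucket.
resource "aws_s3_bucket_public_access_block" "guard" {
  bucket = aws_s3_bucket.bucket.id # assumed defined elsewhere

  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}
```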
Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called “scope”.
The widest scopes a role can be assigned to are:
Subscription: a role assigned with this scope grants access to all resources of this Subscription.
Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.
In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.
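As a sketch, a role assignment can be scoped to a single resource group rather than a Subscription or Management Group (the resource group reference and principal ID are illustrative):

```hcl
# Assign the role at resource-group scope, the narrowest useful scope here,
# rather than Subscription or Management Group scope.
resource "azurerm_role_assignment" "reader" {
  scope                = azurerm_resource_group.rg.id # assumed defined elsewhere
  role_definition_name = "Reader"
  principal_id         = var.principal_object_id # illustrative
}
```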
Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic. Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.
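A minimal Terraform sketch of a compliant rule (group name, VPC reference, and CIDR are illustrative; the key point is avoiding 0.0.0.0/0 on port 22):

```hcl
# Allow SSH only from a trusted administrative range, never 0.0.0.0/0.
resource "aws_security_group" "admin" {
  name   = "admin-ssh" # illustrative
  vpc_id = var.vpc_id  # illustrative

  ingress {
    description = "SSH from the corporate VPN range only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"] # documentation range as placeholder
  }
}
```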