The core components of a landing zone

The primary goal of a landing zone is to ensure consistent deployment and governance across various environments, such as production (Prod), quality assurance (QA), user acceptance testing (UAT), and development (Dev). Let us understand the core concepts associated with landing zones:

  • Network segmentation: Network segmentation is a critical aspect of a landing zone architecture, and it involves dividing the cloud environment into distinct network segments to ensure isolation and security between different environments and workloads. Each environment (Prod, QA, UAT, and Dev) has a dedicated network segment. These segments are logically separated to prevent unauthorized access between environments. Network segmentation ensures that activities in one environment do not impact others and that sensitive data is adequately protected.
  • Isolation of environments: The network segments for each environment are isolated from each other to minimize the risk of data breaches or unauthorized access. This can be achieved through various means, such as Virtual Private Clouds (VPCs) in AWS, Virtual Networks (VNets) in Azure, or VPC networks in GCP.
  • Connectivity between environments: While isolation is crucial, there are specific scenarios where controlled connectivity is required between environments, such as data migration or application integration. This connectivity should be strictly controlled and monitored to avoid security risks.
  • Identity and access management (IAM): IAM policies and roles are implemented to regulate access to cloud resources within each environment. This ensures that only authorized users have access to specific resources based on their roles and responsibilities.
  • Security measures: Each landing zone environment should have security measures, including firewall rules, security groups, network access control lists (NACLs), and other security-related settings. This helps safeguard resources and data from potential threats.
  • Centralized governance: A landing zone architecture also implements centralized governance and monitoring to maintain consistency, compliance, and visibility across all environments. This involves using a central management account or a shared services account for common services.
  • Resource isolation: Within each environment, further resource isolation can be achieved by using resource groups (Azure), projects (GCP), or organizational units (AWS) to logically group resources and manage access control more effectively.
  • Monitoring and auditing: To maintain the health and security of the landing zone, comprehensive monitoring and auditing practices should be implemented. This includes monitoring for suspicious activities, resource utilization, and compliance adherence.
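
To make the segmentation and isolation ideas above more concrete, here is a minimal sketch that provisions one dedicated network per environment. It assumes AWS and the boto3 SDK purely for illustration; the CIDR ranges and tag names are made up, and an equivalent approach could be taken with Azure VNets or GCP VPC networks.

```python
# A minimal sketch of per-environment network segmentation, assuming AWS and boto3.
# The CIDR ranges, tag names, and environment list are illustrative, not prescriptive.
import boto3

ENVIRONMENTS = {
    "prod": "10.0.0.0/16",
    "qa":   "10.1.0.0/16",
    "uat":  "10.2.0.0/16",
    "dev":  "10.3.0.0/16",
}

ec2 = boto3.client("ec2")

for env, cidr in ENVIRONMENTS.items():
    # One dedicated VPC per environment keeps workloads logically isolated;
    # connectivity between them (e.g., peering) must be added explicitly and deliberately.
    ec2.create_vpc(
        CidrBlock=cidr,
        TagSpecifications=[{
            "ResourceType": "vpc",
            "Tags": [{"Key": "environment", "Value": env}],
        }],
    )
```

In a real landing zone, this kind of provisioning would typically be driven by infrastructure-as-code templates under centralized governance rather than by ad hoc scripts.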

Overall, a landing zone architecture provides a solid foundation for an organization’s cloud deployment by enforcing security, governance, and network segmentation across different environments. This architecture is cloud provider-agnostic and can be adapted to various cloud platforms such as Azure, AWS, and GCP while following their respective best practices and services. To read more about it, you can search for Cloud Adoption Framework, followed by the cloud provider’s name, via your favorite search engine – you will get plenty of resources.

Summary

Cloud security is an interesting topic and fun to learn. I hope you enjoyed reading about these fundamental concepts as much as I enjoyed writing about them. In this chapter, we introduced you to some important security and compliance concepts, including shared responsibility in cloud security, encryption and its relevance in a cloud environment, compliance concepts, the Zero Trust model and its foundational pillars, and some of the most important topics related to cryptography. Finally, you were introduced to the Cloud Adoption Framework (CAF) and landing zones. All the terms and concepts discussed in this chapter will be referred to throughout this book. I encourage you to dive into these topics as deeply as you can.

In the next chapter, we will learn about cloud security posture management (CSPM) and the important concepts around it. Happy learning!

Further reading

To learn more about the topics that were covered in this chapter, look at the following resources:

Encrypting data in different stages

Data can be classified into different stages based on its level of activity or usage. The three main stages are data at rest, data in transit, and data in use. Encryption is a crucial technique that is used to protect data in each of these stages:

  • Data at rest: Data at rest refers to data that is stored on storage devices, such as hard drives, databases, or cloud servers, when it is not actively in use or being transmitted. Encryption at rest ensures that even if someone gains physical or unauthorized access to the storage medium, they won’t be able to read or understand the data without the appropriate decryption key. For example, when you store sensitive files on your computer’s hard drive, encrypting the files will protect them from unauthorized access if your device is lost or stolen.
  • Data in transit: Data in transit refers to data that is being transmitted over networks between different devices or systems. Encryption in transit ensures that data is secured while it is moving from one location to another, preventing interception or eavesdropping by unauthorized parties. Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols are commonly used for encrypting data during its transmission over the internet. For example, when you access a website using HTTPS, the data that’s exchanged between your browser and the website’s server is encrypted in transit.
  • Data in use: Data in use refers to data that is actively being processed or accessed by an application or user. Encryption at this stage involves protecting the data while it is being used to prevent unauthorized access or disclosure. This can be achieved using techniques such as memory encryption or secure enclaves. For example, when you open a password-protected document, the data in the document is decrypted in memory for you to view and edit it. When you close the document or log out, the decrypted copy is discarded or encrypted again in memory to protect it from potential unauthorized access.
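
As a small illustration of protecting data at rest, the following sketch uses the symmetric Fernet scheme from the Python cryptography library. The key handling is deliberately simplified; in practice the key would be stored and rotated in a key management service, never alongside the data it protects.

```python
# Minimal illustration of encrypting data at rest with the `cryptography` library.
# Key handling is deliberately simplified; in practice the key belongs in a KMS/HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # symmetric key
cipher = Fernet(key)

plaintext = b"customer SSN: 123-45-6789"
ciphertext = cipher.encrypt(plaintext)  # what actually lands on disk

# Without the key, the stored ciphertext is unreadable.
assert cipher.decrypt(ciphertext) == plaintext
```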

Now that we have briefly covered encryption, let’s understand the importance of encryption in the context of a cloud environment.

Importance of encryption for a multi-cloud hybrid environment

The importance of encryption in securing the cloud cannot be overstated. Encryption plays a vital role in ensuring the confidentiality, integrity, and privacy of sensitive data and communication within cloud environments. Here’s why encryption is essential for cloud security:

  • Data confidentiality: Encryption ensures that sensitive data stored in the cloud remains unreadable to unauthorized parties. Even if a security breach occurs, encrypted data appears as ciphertext, protecting it from exposure and misuse.
  • Secure communication: When data is transmitted between cloud services and users, encryption guarantees secure communication. It prevents interception and eavesdropping, ensuring that sensitive information remains private during transit.
  • Data integrity: Cryptographic techniques, such as digital signatures and hash functions, verify data integrity in the cloud. This prevents unauthorized modification or data tampering, maintaining its accuracy and reliability.
  • Access control: Encryption enables robust access control in the cloud. By encrypting data and managing cryptographic keys effectively, cloud providers can enforce access restrictions, ensuring that data is accessible only to authorized personnel.
  • Regulatory compliance: Many industries are subject to data protection regulations that require the use of strong cryptographic measures. By employing encryption, cloud providers can comply with these regulations and safeguard sensitive data.
  • User authentication: Cryptographic mechanisms such as digital certificates and public key infrastructure (PKI) facilitate secure user authentication in the cloud. This ensures that users and services are legitimate and authorized to access cloud resources.
  • Key management: Cloud environments involve managing a vast number of cryptographic keys for different purposes. Proper key management is essential for maintaining the security of encrypted data and protecting against unauthorized access.
  • Multi-tenancy security: In a cloud environment, multiple users and organizations share the same infrastructure. Cryptography helps ensure that data from different tenants remains isolated and inaccessible to others, even if they share the same physical resources.
  • Data residency and sovereignty: Encryption helps maintain data residency and sovereignty. Data can be encrypted in such a way that it remains unreadable to unauthorized entities, even if it’s stored in different jurisdictions or countries.
  • Data sharing and collaboration: With encryption, cloud users can securely share and collaborate on sensitive data with other authorized users or organizations without the risk of exposing the data to unauthorized parties.
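
To illustrate the data integrity point above, a digest computed before data is uploaded can be recomputed after download to detect tampering. This is a minimal, standard-library sketch; real systems usually combine hashing with digital signatures or MACs so that the digest itself cannot be silently replaced.

```python
# Verifying data integrity with a SHA-256 digest (Python standard library only).
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report.csv contents"
digest_before_upload = sha256_digest(original)

# ... the object is uploaded to and later downloaded from cloud storage ...
downloaded = original  # stand-in for the downloaded bytes

if sha256_digest(downloaded) != digest_before_upload:
    raise ValueError("Object was modified in storage or in transit")
```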

Overall, encryption provides a critical layer of protection for cloud data and services.

Now, let’s understand how encryption is achieved in cloud environments.

Why is it important to maintain confidentiality, integrity, and availability?

Cybersecurity professionals and cybercriminals focus on the same three properties: the former work to protect the confidentiality, integrity, and availability of a system, while the latter put all their effort into disrupting them. Maintaining the CIA triad is crucial because it serves as a comprehensive framework for addressing and balancing critical aspects of information security. Here is why it is essential to maintain the CIA triad:

  • Comprehensive security: The CIA triad covers three fundamental dimensions of information security. By considering all three aspects, organizations can ensure a holistic approach to protecting their data and systems from a wide range of threats.
  • Risk management: The triad helps organizations identify and prioritize potential risks. By understanding the vulnerabilities associated with confidentiality, integrity, and availability, they can implement appropriate security measures to mitigate these risks effectively.
  • Compliance and regulations: Many laws and industry regulations mandate the protection of sensitive data and information. Adhering to the CIA triad assists organizations in complying with these legal requirements and demonstrating due diligence in safeguarding information.
  • Trust and reputation: Maintaining the CIA triad instills confidence and trust among stakeholders, customers, and partners. Organizations that prioritize security and protect information gain a reputation for being reliable and trustworthy.
  • Business continuity: Ensuring availability through the CIA triad helps organizations maintain operations even in the face of disruptions or attacks, thus safeguarding business continuity and reducing the impact of potential downtime.
  • Intellectual property protection: The triad’s integrity aspect is particularly vital for safeguarding intellectual property, trade secrets, and proprietary information. Maintaining data integrity prevents unauthorized changes or theft of valuable assets.
  • Incident response and recovery: The CIA triad aids in developing effective incident response and recovery plans. Understanding how confidentiality, integrity, and availability may be compromised allows organizations to respond swiftly and appropriately to security incidents.
  • Defense against evolving threats: As cybersecurity threats continue to evolve, the CIA triad remains a fundamental principle for guiding security strategies. By continually assessing and adapting security measures, organizations can stay ahead of emerging threats.
  • Competitive advantage: Demonstrating a strong commitment to the CIA triad can become a competitive advantage. Organizations that effectively protect their data and systems may gain a competitive edge by inspiring trust and attracting security-conscious customers and partners.
  • Proactive security culture: The CIA triad encourages organizations to cultivate a security-focused culture. By embedding security principles into their practices, employees become more aware of their role in protecting information and are better prepared to respond to security challenges.

In short, maintaining the CIA triad is vital for establishing a robust and resilient information security foundation. It helps organizations protect sensitive data, maintain business continuity, comply with regulations, and build trust among stakeholders, ultimately contributing to their overall success and longevity. Now, let us understand how organizations can maintain the CIA triad.

Security products and strategies at different layers

Let us take a closer look at what security products and strategies are appropriate and applied at different layers:

  • Physical security: Physical security controls are an important part of DiD as they help protect an organization’s physical assets, such as its buildings, servers, and other infrastructure. Here are some examples of physical security controls that are applied at this layer:
    • Perimeter security: Perimeter security controls are used to control access to the organization’s property. Examples include fences, walls, gates, and barriers.
    • Access control: Access control measures are used to control who has access to the organization’s physical assets. Examples include ID badges, security guards, and biometric authentication systems.
    • Surveillance: Surveillance measures are used to monitor the organization’s physical assets for potential security threats. Examples include CCTV cameras, motion detectors, and security patrols.
    • Environmental controls: Environmental controls are used to protect the organization’s physical assets from damage caused by environmental factors such as fire, water, and temperature. Examples include fire suppression systems, water leak detection systems, and temperature control systems.
    • Redundancy: Redundancy measures are used to ensure that the organization’s physical assets remain operational even in the event of failure. Examples include backup generators, redundant HVAC systems, and redundant network connections.
  • Identity and access: This layer implements security controls such as MFA, conditional access, attribute-based access control (ABAC), and role-based access control (RBAC) to protect infrastructure and change control.
  • Perimeter: Protection mechanisms applied across your corporate network to filter large-scale attacks such as DDoS so that resources are not exhausted, which would otherwise cause a denial of service.
  • Network: Security techniques such as network segmentation and network access control are used to group related resources and to limit communication between them, preventing lateral movement.
  • Compute: This involves limiting access to VMs to whitelisted IPs only and restricting ports by opening only the ones that are required.
  • Applications: Four primary techniques can be used to secure applications, each with its strengths and weaknesses. Let us take a look:
    • Runtime Application Self-Protection (RASP): RASP is an application security technology that is designed to detect and prevent attacks at runtime. RASP integrates with the application runtime environment and monitors the behavior of the application to identify potential threats. RASP can detect attacks such as SQL injection, cross-site scripting (XSS), and buffer overflow attacks, and can take action to block the attack or alert security personnel.
    • Interactive Application Security Testing (IAST): IAST is an application security testing technique that combines aspects of both SAST and DAST (described next). IAST is a real-time security testing technology that provides feedback on vulnerabilities during the testing process. IAST can detect vulnerabilities such as SQL injection and XSS attacks by monitoring the application during testing.
    • Static Application Security Testing (SAST): SAST is an application security testing technique that analyzes the application’s source code for security vulnerabilities. SAST can identify vulnerabilities such as buffer overflows, SQL injection, and XSS attacks. SAST is typically run during the development process and can help developers identify and fix vulnerabilities before the application is deployed.
    • Dynamic Application Security Testing (DAST): DAST is an application security testing technique that analyzes the application while it is running. DAST can identify vulnerabilities such as SQL injection, XSS attacks, broken authentication, and session management. DAST is typically run after the application is deployed to identify vulnerabilities that may have been missed during the development process.

Overall, these techniques can be used in combination to provide a comprehensive approach to securing applications. Each technique has its strengths and weaknesses, and the choice of which technique to use depends on the specific needs of the organization and the application being secured.

  • Data: RBAC and ABAC are both access control models that are used to enforce data security:
    • In an RBAC model, access to resources is granted based on the user’s role or job function within an organization. This means that users are assigned specific roles, and those roles are granted permission to access specific resources. For example, an administrator role might be granted full access to a system, while a regular user role might only be granted access to certain parts of the system.
    • In an ABAC model, access to resources is granted based on a combination of attributes, such as the user’s job function, location, and time of day. This means that access control policies can be more flexible and granular than in an RBAC model. For example, a policy might be created to grant access to a resource only if the user is accessing it from a specific location and during specific hours.

Both RBAC and ABAC can be used to enforce data security by ensuring that only authorized users are granted access to sensitive data. Which model to use depends on the specific needs of the organization and the level of granularity and flexibility required for access control policies.
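
The difference between the two models can be sketched in a few lines of Python. The roles, attributes, and policy rules below are hypothetical and exist only to show where each model makes its decision.

```python
# Hypothetical RBAC vs. ABAC checks; role names, attributes, and rules are illustrative.
from datetime import datetime

ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "delete"},
    "regular_user": {"read"},
}

def rbac_allows(role: str, action: str) -> bool:
    # RBAC: the decision depends only on the user's role.
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(attributes: dict, action: str) -> bool:
    # ABAC: the decision combines several attributes of the user and the request.
    within_hours = 9 <= attributes["request_time"].hour < 18
    return (
        action in ROLE_PERMISSIONS.get(attributes["role"], set())
        and attributes["location"] == "office-network"
        and within_hours
    )

print(rbac_allows("regular_user", "read"))  # True
print(abac_allows(
    {"role": "regular_user", "location": "home", "request_time": datetime(2024, 1, 1, 10)},
    "read",
))  # False: right role and time, wrong location
```

Note how the ABAC check can refuse a request that RBAC alone would have allowed, because it also weighs the context of the request.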

At this point, you should have a clear baseline understanding of DiD. Now, let’s look at a benchmark model in information security known as the confidentiality, integrity, and availability (CIA) triad.

Division of responsibility

Let us understand how the division of responsibilities varies from one service model to another:

  • On-premises data centers: In an on-premises infrastructure (hardware and software), the customer is responsible for everything, from the physical security of data centers to the encryption of sensitive data.
  • IaaS: Virtual machine services offered by cloud providers, such as Azure VMs, AWS EC2, and Google Compute Engine, are typical examples of IaaS. If a customer decides to use VMs in the cloud, the cloud provider is responsible for the security of the physical data center, the physical network, and the physical host where the VM runs. As per Figure 1.4, securing the operating system (vulnerabilities and patches), network controls, applications hosted in the VM, identity and directory infrastructure, the devices through which VMs are accessed, and the information and data in the VM are all the customer’s responsibility.
  • PaaS: A wide range of services is offered by cloud providers under the PaaS category. Azure Web App, Logic Apps, Azure Functions, Azure SQL, Azure Service Bus, AWS Lambda, AWS Elastic Beanstalk, and Google App Engine are a few services under the PaaS category. As the service name suggests, PaaS provides an environment for building, testing, and deploying software applications. The most useful benefit of PaaS for customers is that it helps them create applications quickly without needing to manage the underlying infrastructure, such as hardware and operating systems. This makes things easier for customers, as they are only responsible for securing their applications and data.
  • SaaS: SaaS is a readymade, subscription-based application made available by cloud providers for its customers. Microsoft 365, Skype, Google Workspace, ERP, Amazon Chime, Amazon WorkDocs, and Dynamics CRM are some common examples of SaaS offerings. Out of all the service offerings, SaaS requires the least security responsibility from customers. The cloud provider is responsible for everything except data, identity access, accounts, and devices.

Important note

No matter which service is availed by the customer, the responsibility to protect accounts and identity, devices (mobile and PCs), and data is always retained by the customer.

The shared responsibility model is one of the most important topics to understand in the cloud security domain. Now that you understand it, let us understand another important topic – defense in depth.

Defense in depth

Defense in depth (DiD) is a cybersecurity strategy that uses a layered security approach to protect an organization’s critical assets from cybercriminals by employing a series of security measures to slow the advance of an attack. It was originally inspired by a military strategy of the same name, in which each layer provides protection so that if one layer is breached, a subsequent layer prevents an attacker from gaining unauthorized access to data.

Security concerns with the public cloud

There are several overriding concerns associated with cloud computing that organizations should be aware of:

  • Unauthorized access: Public cloud services can be vulnerable to unauthorized access, which can lead to data breaches and the exposure of sensitive information.
  • Insider threats: Cloud providers have access to users’ data, which means that insider threats can pose a risk to security.
  • Data loss: Public cloud services can suffer from data loss, which can occur due to hardware failures or other technical issues:

Figure 1.2 – Cloud security concerns

  • Compliance issues: Public cloud services may not always meet regulatory and compliance requirements for data storage and security.
  • Multi-tenancy risks: Public cloud services are often multi-tenant, which means that multiple users share the same physical infrastructure. This can increase the risk of data leakage or unauthorized access if they’re not managed properly.
  • Vulnerabilities in third-party tools: Public cloud services often rely on third-party tools and vendors, which can create vulnerabilities if these vendors are not properly vetted or have weak security measures in place.
  • Lack of control: Public cloud services are managed by the cloud provider, which means that users have limited control over the security measures that are implemented.
  • DDoS attacks: Public cloud services can be vulnerable to distributed denial of service (DDoS) attacks, which can disrupt service availability.
  • Data breaches through APIs: Public cloud services often use APIs to enable integration with other systems, which can create vulnerabilities if these APIs are not secured properly.
  • Data exposure through misconfigured services: Public cloud services can be vulnerable to data exposure if services are misconfigured, or access controls have not been set up properly.
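
As a small, hedged example of catching the last concern in practice, the following sketch flags storage buckets whose public-access controls are missing or disabled. It assumes AWS, the boto3 SDK, and read-only credentials; other providers expose similar configuration checks through their own APIs.

```python
# A small sketch of spotting one common misconfiguration: S3 buckets whose
# public-access blocks are missing or disabled. Assumes AWS and boto3.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no public-access block configured at all
    if not fully_blocked:
        print(f"Review bucket '{name}': public access is not fully blocked")
```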

It is important to understand these risks and take appropriate measures to mitigate them, such as implementing strong authentication and access controls, regularly monitoring and auditing activity, and using encryption to protect sensitive data. It is equally important to work with reputable cloud providers that have a strong track record for security and compliance, and to address these concerns through careful planning, risk assessment, and ongoing monitoring and management.

Now that you understand cloud computing and the security concerns around it, let us learn about the shared responsibility model.

Cloud computing service model

Cloud service models are different types of cloud computing services that are provided by CSPs to customers or users. There are three main types of cloud service models:

  • Infrastructure-as-a-Service (IaaS): In this service model, the CSP provides the infrastructure or computing resources such as servers, storage, and networking, which can be used by customers to build and manage their applications or services. The customer has control over the operating system, applications, and security, while the CSP is responsible for the underlying infrastructure.
  • Platform-as-a-Service (PaaS): In this service model, the CSP provides a platform for customers to develop, run, and manage their applications without the need to manage the underlying infrastructure. The customer can focus on building and deploying their applications while the CSP takes care of the infrastructure, operating system, and middleware.
  • Software-as-a-Service (SaaS): In this service model, the CSP provides a complete software application or service that can be accessed and used by customers over the internet. The customer does not need to install or manage the software as it is provided by the CSP as a service. Examples of SaaS include email, online storage, and customer relationship management (CRM) software.

In simple terms, cloud service models are different types of cloud computing services that are provided by CSPs to customers. These services can range from providing infrastructure resources to complete software applications, with varying degrees of control and management by the customer.

Next, let us talk about cloud security.

What is cloud security?

Cloud security refers to the set of practices, technologies, policies, and measures designed to safeguard data, applications, and infrastructure in cloud environments. It is crucial because it addresses the unique security challenges and risks associated with cloud computing, which encompasses services such as IaaS, PaaS, and SaaS.

Important note

Gartner reports (https://www.gartner.com/en/newsroom/press-releases/2021-11-10-gartner-says-cloud-will-be-the-centerpiece-of-new-digital-experiences) that 99% of cloud breaches are traced back to preventable misconfigurations or mistakes by cloud customers.

It is evident that cloud computing services bring some overriding concerns too, and most of them can be prevented if services are configured correctly. This includes network and system misconfigurations, IAM misconfigurations, and accidental exposure of resources. We will read more about major configuration risks in Chapter 11, but some of them are explained in the following subsection.

The shared responsibility model

Cloud security is a tricky area, and there are many myths about securing the cloud. Some think that once you have moved to the cloud, it is the cloud provider’s responsibility to protect everything in it, while others think that nothing in the cloud is secure and that it is not safe to move there, especially when dealing with sensitive data. The fact is that security and compliance in the cloud are a shared responsibility between cloud providers and cloud customers.

This brings a lot of questions to our minds. Who is responsible for what? How do you define the responsibility matrix between cloud providers and customers? Who defines those responsibilities and on what basis?

Let us understand this with a simple and fun analogy of a Pizza-as-a-Service model. The cloud’s shared responsibility model can be explained using the analogy of ordering pizza in different ways: making it at home, ordering a Take and Bake pizza, ordering a pizza for delivery, or dining out at a restaurant:

Figure 1.3 – Pizza-as-a-Service model

  • Making pizza at home is like managing your IT infrastructure. You are responsible for everything, including buying the ingredients (hardware and software), preparing the dough and toppings (setting up the infrastructure and applications), cooking the pizza (maintaining the infrastructure), and cleaning up afterward (managing security, backups, and disaster recovery).
  • Ordering a Take and Bake pizza is like using IaaS. You order the pizza with the toppings you want, but the pizza is not cooked yet. You must take it home and cook it yourself. Similarly, with IaaS, you are provided with a virtual infrastructure that you configure and manage yourself, including installing and configuring the operating system, middleware, and applications.
  • Ordering a pizza for delivery is like using PaaS. You order the pizza with the toppings you want, and it is delivered to you fully cooked. You do not have to worry about the cooking process, but you still have control over the toppings. Similarly, with PaaS, you are provided with a platform for developing and deploying applications, and the CSP takes care of the underlying infrastructure.
  • Dining out at a restaurant is like using SaaS. You order the pizza, and it is delivered to you fully cooked and ready to eat. You do not have to worry about cooking or toppings as the restaurant takes care of everything. Similarly, with SaaS, you use a cloud-based application that is fully managed by the cloud service provider, and you do not have to worry about the underlying infrastructure, security, or backups.

In all these scenarios, the shared responsibility model applies. You, as the customer, are responsible for selecting the pizza toppings you want, just as you are responsible for configuring and securing your data and applications in the cloud. The cloud service provider is responsible for providing a secure and reliable environment for your data and applications, just as the restaurant is responsible for providing a clean and safe dining experience.

Now that you have understood shared responsibility via an interesting analogy, let’s understand the concept with the help of an actual responsibility model provided by every cloud provider for their customers. This responsibility is also known as security of the cloud versus security in the cloud:

Figure 1.4 – Shared responsibility model

Let us quickly discuss what security of the cloud and security in the cloud mean:

  • Security of the cloud: Security of the cloud means protecting the infrastructure that runs all the services offered by the cloud provider, which is composed of the hardware, software, networking, and facilities that public cloud services use. Cloud providers are responsible for the security of the cloud, which includes protecting the cloud environment against any security threats.
  • Security in the cloud: This refers to the responsibility held by customers and is solely determined by the cloud services that customers choose for consumption and where those workloads are hosted, such as IaaS, PaaS, SaaS, Database-as-a-Service (DBaaS), Container-as-a-Service (CaaS), or even Security-as-a-Service (SECaaS).

Customers must carefully consider the services they choose from different providers as their responsibilities vary depending on the services they use, the integration of those services into their IT environment, and applicable laws and regulations.

The shared responsibility model makes the division of responsibility clear. When an organization has no cloud footprint, it is 100% responsible for the security and compliance of its infrastructure. When an organization moves to the cloud in a hybrid or cloud-native setup, the responsibility is shared between both parties.

Defense in depth guiding principle

The guiding principle of DiD is the idea that a single security product will not ensure the safety of critical data. Implementing multiple security controls at distinct levels reduces the chance of breaches caused by external or internal threats. The following diagram depicts the concept of the DiD layer. This approach is designed to provide a layered defense that can stop attackers at multiple points in the attack chain, rather than having to rely on a single point of failure:

Figure 1.5 – Defense in depth (http://3.bp.blogspot.com/-YNJp1PXeV0o/UjpD7j1-31I/AAAAAAAADJE/O_6COIge7CA/s1600/TechnetDinD.jpg)

DiD provides multiple layers of protection for a system or organization. Some important security practices that are used in DiD are as follows:

  • Least-privilege: Least-privilege access is the practice of granting users just enough access to perform their designated tasks in the organization while restricting their access to all other resources and systems. Limiting the permissions on a user’s identity helps minimize risk if credentials are compromised and an unauthorized user attempts to access sensitive data.
  • Multi-factor authentication (MFA): This is a security mechanism that requires users to provide two or more factors of authentication to access a system or application. This approach adds an extra layer of security to the authentication process, making it more difficult for attackers to gain unauthorized access. Users can rely on either software or hardware tokens to provide an additional layer of security beyond their password (a minimal one-time code sketch follows at the end of this list):
    • Software tokens are typically generated by a mobile app or software program. Once the user has entered their username and password, they are prompted to enter a one-time code generated by the app or software. This code is typically valid for only a short period and changes frequently, making it difficult for attackers to intercept and reuse.
    • Hardware tokens, on the other hand, are physical devices that generate one-time codes that the user must enter to complete the authentication process. These tokens may be in the form of key fobs, smart cards, or USB devices. The user inserts the hardware token into a device or presses a button to generate a code, which they then enter into the system or application being accessed.

Both software and hardware tokens provide an additional layer of security by requiring something in addition to the user’s password to gain access to a system or application. However, hardware tokens are generally considered more secure as they are not susceptible to attacks that can compromise software-based tokens, such as malware or phishing attacks. They also require physical possession of the token, making it more difficult for attackers to gain access, even if they have compromised the user’s password.

  • Network segmentation: This is the practice of dividing computer networks into smaller parts to limit the exposure of internal systems and data to vendors, contractors, and other outside or inside users. This also helps the security team protect sensitive data from insider threats, limit the spread of malware, and comply with data regulations.
  • Intrusion detection and prevention: Intrusion detection and prevention systems can be used to detect and prevent attacks on a system or network. These systems can be configured to alert security personnel or take automated action when an attack is detected.
  • Security training: Providing security awareness training to employees is an important security practice to ensure that they understand the importance of security and are aware of common threats and attack vectors.

These are just a few examples of the security practices that are part of DiD. Implementing these practices in a comprehensive and layered approach can help improve the overall security of an organization.
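
To make the one-time codes mentioned under MFA less abstract, here is a minimal time-based one-time password (TOTP) sketch built only on the Python standard library. It follows the general RFC 6238 approach, and the base32 secret shown is an example value rather than anything to reuse.

```python
# Minimal time-based one-time password (TOTP) sketch, standard library only.
# The base32 secret is an example value; real secrets come from MFA enrollment.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # time step since the Unix epoch
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds
```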

The Zero Trust model

With exponential growth in cloud technology and the mobile workforce, the corporate network perimeter has been redefined. The traditional perimeter-based security approach has proven ineffective now that resources are hosted in multi-cloud and hybrid scenarios. Today, organizations need a new security model that provides secure access to their resources, irrespective of where they are accessed from and regardless of the user or application environment. A Zero Trust security model embraces the mobile workforce and helps protect identities, devices, apps, and data wherever they are located.

The Zero Trust model operates on the principle of “trust no one, verify everything, every time.” This means that every user, device, application, and data flow within an organization’s network must be explicitly verified before access to resources is granted:

Figure 1.8 – The Zero Trust model (https://www.itgovernance.co.uk/blog/wp-content/uploads/2015/07/PPT-Diagram-Blog.png)

Zero Trust guiding principles

The Zero Trust model has three principles based on NIST guidelines:

  • Verify explicitly: The “verify explicitly” principle of Zero Trust means that access should be granted only after a user’s or device’s identity and security posture have been verified and authenticated. This requires strong authentication mechanisms, such as MFA, that require users to provide additional forms of authentication beyond just a password, such as a fingerprint scan, facial recognition, or a one-time code. Devices, likewise, must be assessed and verified before they are granted access to resources within an organization’s network. This involves evaluating the device’s security posture to ensure that it meets a minimum set of security standards, such as having the latest security patches, running up-to-date antivirus software, and having strong passwords or other authentication mechanisms in place. Devices that do not meet these standards are either denied access or granted limited access until they can be remediated and brought up to the required level (a rough policy-evaluation sketch follows this list).
  • Least privilege access: Least privilege access is commonly implemented with Just-in-Time (JIT) access, which elevates permissions only when they are required to perform a task and then reverts them, and Just Enough Administration (JEA), which grants only the permissions needed for day-to-day tasks.
  • Minimize the blast radius: This refers to the assume-breach mindset, where you build your defenses with the worst-case scenario in mind so that even if an external or internal breach occurs, the impact on the organization is minimal. Network segmentation, end-to-end encryption, advanced threat detection, and deeper analytics visibility are some practices that minimize the blast radius.
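
As a rough illustration of the “verify explicitly” principle, the following sketch evaluates a hypothetical access request against identity and device-posture signals before granting anything; the signal names, thresholds, and outcomes are invented for the example.

```python
# Hypothetical Zero Trust access decision: verify identity and device posture explicitly.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_passed: bool          # identity verified with a second factor
    device_patched: bool      # device meets the minimum patch level
    antivirus_current: bool   # endpoint protection is up to date
    requested_role: str

def evaluate(request: AccessRequest) -> str:
    if not request.mfa_passed:
        return "deny"                      # never trust a password alone
    if not (request.device_patched and request.antivirus_current):
        return "limited"                   # quarantine until the device is remediated
    if request.requested_role == "admin":
        return "grant-just-in-time"        # elevate temporarily, then revert (JIT/JEA)
    return "grant"

print(evaluate(AccessRequest(True, True, True, "admin")))    # grant-just-in-time
print(evaluate(AccessRequest(True, False, True, "reader")))  # limited
```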

These guiding principles help us in understanding the baseline on which we define the conditions for the Zero Trust model. Now, let’s understand which guidelines apply to which pillars.