The core components of a landing zone – Cloud Security Fundamentals

The primary goal of a landing zone is to ensure consistent deployment and governance across various environments, such as production (Prod), quality assurance (QA), user acceptance testing (UAT), and development (Dev). Let us understand the core concepts associated with landing zones:

  • Network segmentation: Network segmentation is a critical aspect of a landing zone architecture, and it involves dividing the cloud environment into distinct network segments to ensure isolation and security between different environments and workloads. Each environment (Prod, QA, UAT, and Dev) has a dedicated network segment. These segments are logically separated to prevent unauthorized access between environments. Network segmentation ensures that activities in one environment do not impact others and that sensitive data is adequately protected.
  • Isolation of environments: The network segments for each environment are isolated from each other to minimize the risk of data breaches or unauthorized access. This can be achieved through various means, such as Virtual Private Clouds (VPCs) in AWS, Virtual Networks (VNets) in Azure, or VPCs in GCP.
  • Connectivity between environments: While isolation is crucial, there are specific scenarios where controlled connectivity is required between environments, such as data migration or application integration. This connectivity should be strictly controlled and monitored to avoid security risks.
  • Identity and access management (IAM): IAM policies and roles are implemented to regulate access to cloud resources within each environment. This ensures that only authorized users have access to specific resources based on their roles and responsibilities.
  • Security measures: Each landing zone environment should have security measures, including firewall rules, security groups, network access control lists (NACLs), and other security-related settings. This helps safeguard resources and data from potential threats.
  • Centralized governance: A landing zone architecture also implements centralized governance and monitoring to maintain consistency, compliance, and visibility across all environments. This involves using a central management account or a shared services account for common services.
  • Resource isolation: Within each environment, further resource isolation can be achieved by using resource groups (Azure), projects (GCP), or organizational units (AWS) to logically group resources and manage access control more effectively.
  • Monitoring and auditing: To maintain the health and security of the landing zone, comprehensive monitoring and auditing practices should be implemented. This includes monitoring for suspicious activities, resource utilization, and compliance adherence.
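Segment planning for the environments above can be sanity-checked programmatically. The following sketch (the CIDR ranges are illustrative, not prescriptive) uses Python's standard `ipaddress` module to verify that per-environment network segments do not overlap:

```python
import ipaddress
from itertools import combinations

# Illustrative per-environment segments; real ranges come from your IP plan.
SEGMENTS = {
    "prod": "10.10.0.0/16",
    "qa":   "10.20.0.0/16",
    "uat":  "10.30.0.0/16",
    "dev":  "10.40.0.0/16",
}

def find_overlaps(segments: dict[str, str]) -> list[tuple[str, str]]:
    """Return every pair of environments whose CIDR ranges overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in segments.items()}
    return [
        (a, b)
        for a, b in combinations(nets, 2)
        if nets[a].overlaps(nets[b])
    ]

if __name__ == "__main__":
    print("overlapping segments:", find_overlaps(SEGMENTS) or "none")
```

Running such a check in a CI pipeline catches address-plan clashes before any cloud resources are provisioned.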

Overall, a landing zone architecture provides a solid foundation for an organization’s cloud deployment by enforcing security, governance, and network segmentation across different environments. This architecture is cloud provider-agnostic and can be adapted to various cloud platforms such as Azure, AWS, and GCP while following their respective best practices and services. To read more about it, you can search for Cloud Adoption Framework, followed by the cloud provider’s name, via your favorite search engine – you will get plenty of resources.

Summary

Cloud security is an interesting topic and fun to learn. I hope you enjoyed reading about these fundamental concepts as much as I enjoyed writing about them. In this chapter, we introduced some important security and compliance concepts: shared responsibility in cloud security, encryption and its relevance in a cloud environment, compliance concepts, the Zero Trust model and its foundational pillars, and some of the most important topics in cryptography. Finally, you were introduced to the CAF and landing zones. The terms and concepts discussed in this chapter will be referred to throughout this book, so I encourage you to dive as deep into these topics as you can.

In the next chapter, we will learn about cloud security posture management (CSPM) and the important concepts around it. Happy learning!

Further reading

To learn more about the topics that were covered in this chapter, look at the following resources:

Encryption in cloud environments

In cloud environments, responsibility for encryption is typically shared between the cloud service provider and the customer. The cloud service provider is responsible for providing the underlying infrastructure and tools to enable encryption, while the customer is responsible for implementing encryption practices for their data and managing access to the encryption keys.

Encryption in a cloud environment can be achieved through a multi-step process that involves various responsibilities and tools. A cloud customer must understand these points. Let’s break down the process:

  1. Data classification and encryption strategy: The customer is responsible for classifying their data based on sensitivity and compliance requirements. They need to determine what data needs to be encrypted and what encryption algorithms to use. No specific tool is involved in this step. It’s more of a policy and decision-making process.
  2. Data encryption: The customer is responsible for encrypting their data before sending it to the cloud or storing it in the cloud service. Various encryption libraries and tools are available for data encryption, such as OpenSSL and HashiCorp Vault, as well as cloud provider-specific encryption via a software development kit (SDK).
  3. Key generation and management: The cloud service provider is responsible for providing a Key Management Service (KMS) that allows customers to create and manage encryption keys securely. Cloud service providers offer their own KMSs, including AWS KMS, Azure Key Vault, and Google Cloud KMS.
  4. Customer Master Key (CMK) creation and protection: The customer is responsible for creating and managing their CMKs within the cloud provider’s KMS. CMKs are used to protect and control access to data encryption keys. The KMS provided by the cloud service provider is used to create and manage CMKs.
  5. Data upload and storage: The cloud service provider is responsible for securely receiving and storing encrypted data. No specific tool is involved here. The cloud provider’s storage infrastructure handles the encrypted data.
  6. Data retrieval and decryption: The customer is responsible for retrieving the encrypted data from the cloud and decrypting it using the appropriate Data Encryption Key (DEK), which is decrypted using the CMK. The decryption process is performed using encryption libraries or tools, along with the cloud provider’s KMS to retrieve and use the necessary keys.
  7. Key rotation and life cycle management: The customer is responsible for regularly rotating encryption keys and managing their life cycle to minimize the risk of unauthorized access. The cloud provider’s KMS offers APIs and tools to facilitate key rotation and life cycle management.
  8. Monitoring and auditing: Both the cloud service provider and the customer share the responsibility of monitoring and auditing encryption-related activities to detect and respond to security incidents or unauthorized access. CSPM tools provide visibility into the risks associated with keys.
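Steps 2 to 6 describe what is commonly called envelope encryption: a data encryption key (DEK) encrypts the data, and a master key wraps the DEK. The sketch below illustrates the pattern using `Fernet` from the third-party `cryptography` package as a stand-in for both the KMS-managed CMK and the DEK cipher; a real deployment would call the cloud provider's KMS APIs (for example, AWS KMS `GenerateDataKey`) instead:

```python
# Envelope encryption sketch. Fernet stands in for the KMS-managed CMK and
# for the DEK cipher; it is used here only for brevity.
from cryptography.fernet import Fernet

def encrypt_envelope(master: Fernet, plaintext: bytes) -> tuple[bytes, bytes]:
    dek = Fernet.generate_key()                   # step 3: generate a DEK
    ciphertext = Fernet(dek).encrypt(plaintext)   # step 2: encrypt the data
    wrapped_dek = master.encrypt(dek)             # step 4: wrap the DEK with the CMK
    return wrapped_dek, ciphertext                # store both; never the plain DEK

def decrypt_envelope(master: Fernet, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = master.decrypt(wrapped_dek)             # step 6: unwrap the DEK first...
    return Fernet(dek).decrypt(ciphertext)        # ...then decrypt the data

master_key = Fernet(Fernet.generate_key())        # stands in for a KMS CMK
wrapped, blob = encrypt_envelope(master_key, b"employee payroll record")
assert decrypt_envelope(master_key, wrapped, blob) == b"employee payroll record"
```

Note that only the wrapped DEK and the ciphertext are stored; the CMK never leaves the KMS boundary in a real cloud setup, which is what makes key rotation (step 7) possible without re-encrypting all data.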

In summary, encryption in the cloud involves collaboration between the cloud service provider and the customer. The customer is responsible for data classification, encryption, key management, and data decryption, while the cloud provider is responsible for providing a secure KMS and ensuring the secure storage and retrieval of encrypted data. Various encryption libraries, KMSs, and CSPM tools play crucial roles in achieving a robust encryption process in the cloud environment.

Now that you have a fundamental understanding of encryption and its relevance in cloud environments, let us understand another important topic: the Cloud Adoption Framework (CAF). This is one of the most important topics for organizations planning to adopt the cloud for their infrastructure.

How do organizations ensure confidentiality, integrity, and availability?

Finding and maintaining the right balance of the CIA triad is challenging due to the diverse threat landscape, competing priorities, the complexity of IT systems, human factors, budget constraints, regulatory compliance, rapid technological advancements, and data sharing complexities. Organizations must proactively assess risks, prioritize assets, implement multi-layered defense-in-depth (DiD) security strategies, and adapt to emerging threats. Collaboration among stakeholders is crucial for achieving a robust and effective security posture. It also requires a holistic approach to security and continual efforts to stay ahead of evolving security challenges. Organizations employ a combination of technical, administrative, and physical security measures to strike the right balance. Here are some common practices:

  • Confidentiality:
    • Access controls: Implementing role-based access control (RBAC) to ensure that only authorized individuals have access to sensitive data and information.
    • Encryption: Encrypting data during transmission (for example, using SSL/TLS for web traffic) and at rest (for example, encrypting data in databases or on storage devices) to protect against unauthorized access.
    • Secure authentication: Using strong authentication methods such as passwords, MFA, or biometrics to verify the identity of users.
  • Integrity:
    • Data validation: Implementing validation checks to ensure that data is accurate, complete, and free from errors when it is entered into systems.
    • Audit trails: Creating logs and audit trails to track changes made to data and detect any unauthorized modifications.
    • Version control: Using version control mechanisms for critical documents to track changes and prevent unauthorized alterations.
  • Availability:
    • Redundancy: Implementing redundant systems and infrastructure to ensure high availability and fault tolerance. This includes redundant servers, network links, and power sources.
    • Load balancing: Using load balancing techniques to distribute traffic across multiple servers, preventing overload and ensuring continuous service availability.
    • Disaster recovery and business continuity planning: Developing comprehensive plans and procedures to recover from system failures, natural disasters, or other emergencies, thus minimizing downtime and maintaining service availability.
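The load-balancing practice above can be sketched in a few lines. This is a toy round-robin balancer (server names are illustrative); real deployments would use a managed load balancer with health checks:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin balancer: spreads requests evenly across servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)  # endless rotation over the pool

    def next_server(self) -> str:
        return next(self._rotation)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
# Four consecutive requests wrap around the pool, preventing overload of any
# single server and keeping the service available if traffic spikes.
assert [lb.next_server() for _ in range(4)] == ["app-1", "app-2", "app-3", "app-1"]
```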

Additionally, organizations can achieve the CIA triad through various administrative practices and security policies:

  • Security awareness training: Conducting regular security awareness training for employees to educate them about security best practices, risks, and the importance of maintaining confidentiality, integrity, and availability
  • Risk assessment and management: Identifying potential security risks and vulnerabilities through risk assessments and implementing measures to mitigate those risks effectively
  • Incident response: Establishing incident response teams and procedures to quickly respond to and mitigate security incidents, ensuring the continuity of operations
  • Regular security audits: Conducting periodic security audits and assessments to evaluate the effectiveness of existing security measures and identify areas for improvement

Achieving the CIA triad is an ongoing process that requires continuous monitoring, updates to security measures, and adaptations to address emerging threats. Organizations must strike a balance between security requirements and business needs and implement appropriate security controls to safeguard their information, systems, and operations effectively.

Now, let us understand another important topic of cybersecurity – the six foundational pillars of the Zero Trust model.

The six foundational pillars

The following are the six pillars of the Zero Trust model. They work together to provide overall robust security for your infrastructure:

  • Identities: Identities can refer to users, devices, or applications/services. It is important to verify and secure each identity with strong authentication across your entire digital estate. When an identity (user/device/service) attempts to access a resource, it must be verified with strong authentication and follow the least privilege principle.
  • Endpoints: These are the carriers through which data flows on-premises and in the cloud; hence, they often create large attack surfaces. It is important to have visibility into the devices accessing the network and to monitor their activities. A device’s security posture and health, from a compliance perspective, is an important aspect of security.
  • Applications: Discovering the shadow IT and in-app permissions is critical because applications are the way organizations’ data is consumed. Not all applications’ access management is managed centrally, so it is important to put a stringent process for access reviews and privileged identity management (PIM) in place.
  • Data: Cloud computing services and offerings have completely changed the way data is managed; traditional perimeter-based whitelisting is no longer effective in today’s hybrid, multi-cloud, and SaaS-based systems. Many organizations do not have complete visibility of what kind of data they are dealing with, which data is most critical, and where it resides in the organization. That is why it is important to discover, classify, label, and encrypt data intelligently based on its attributes. The whole effort is to protect the organization’s critical data and ensure that it is safe from both internal and external threats. This is especially critical when data leaves the devices, applications, infrastructure, and networks controlled by the organization.
  • Infrastructure: Threats and attack vectors are very much a reality, whether on-premises or in the cloud. You can use telemetry such as just-in-time (JIT) access, location, device, and version information to detect anomalies and attacks. This helps you allow, block, or automatically remediate risky behavior in near real time, such as repeated failed login attempts.
  • Networks: To make this pillar stronger, it is important to ensure that the devices are not trusted by default, even if they are in a trusted network. Implementing end-to-end encryption, reducing the attack surface by policy, network segmentation, in-network micro-segmentation, and real-time threat detection are some of the critical practices to keep in place.
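The pillars share one decision rule: access is granted only when every signal passes explicitly, and network location alone never grants trust. The following sketch (signal names are hypothetical, chosen to mirror the pillars above) shows that rule as code:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # identities pillar: strong authentication (e.g., MFA)
    device_compliant: bool    # endpoints pillar: posture/health check passed
    permission_granted: bool  # least-privilege role allows this resource
    network_trusted: bool     # networks pillar: deliberately NOT used to grant access

def evaluate(request: AccessRequest) -> str:
    """Zero Trust: every signal must pass explicitly; being on a 'trusted'
    network never substitutes for verification."""
    if (request.identity_verified
            and request.device_compliant
            and request.permission_granted):
        return "allow"
    return "deny"

# A non-compliant device is denied even from a trusted network...
assert evaluate(AccessRequest(True, False, True, network_trusted=True)) == "deny"
# ...while a fully verified request is allowed even from an untrusted one.
assert evaluate(AccessRequest(True, True, True, network_trusted=False)) == "allow"
```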

Implementing all six pillars strongly is extremely hard to achieve. It becomes even more challenging when organizations have an enormously complex and hybrid infrastructure where they do not include security as a priority at an early stage. Now, let’s understand the difference between security and compliance.

Cloud computing service models

Cloud service models are different types of cloud computing services that are provided by CSPs to customers or users. There are three main types of cloud service models:

  • Infrastructure-as-a-Service (IaaS): In this service model, the CSP provides the infrastructure or computing resources such as servers, storage, and networking, which can be used by customers to build and manage their applications or services. The customer has control over the operating system, applications, and security, while the CSP is responsible for the underlying infrastructure.
  • Platform-as-a-Service (PaaS): In this service model, the CSP provides a platform for customers to develop, run, and manage their applications without the need to manage the underlying infrastructure. The customer can focus on building and deploying their applications while the CSP takes care of the infrastructure, operating system, and middleware.
  • Software-as-a-Service (SaaS): In this service model, the CSP provides a complete software application or service that can be accessed and used by customers over the internet. The customer does not need to install or manage the software as it is provided by the CSP as a service. Examples of SaaS include email, online storage, and customer relationship management (CRM) software.

In simple terms, cloud service models are different types of cloud computing services that are provided by CSPs to customers. These services can range from providing infrastructure resources to complete software applications, with varying degrees of control and management by the customer.
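The varying split of control between customer and CSP can be encoded as a simple lookup. The layer names below are illustrative simplifications of the stacks described above, not an official taxonomy:

```python
# Illustrative responsibility split per service model (layer names are
# simplified; consult your CSP's shared responsibility documentation).
RESPONSIBILITY = {
    "IaaS": {"customer": {"data", "applications", "operating system"},
             "csp": {"virtualization", "servers", "storage", "networking"}},
    "PaaS": {"customer": {"data", "applications"},
             "csp": {"operating system", "middleware", "virtualization",
                     "servers", "storage", "networking"}},
    "SaaS": {"customer": {"data"},
             "csp": {"applications", "operating system", "middleware",
                     "virtualization", "servers", "storage", "networking"}},
}

def who_manages(model: str, layer: str) -> str:
    """Return which party manages a given layer under a given service model."""
    split = RESPONSIBILITY[model]
    if layer in split["customer"]:
        return "customer"
    if layer in split["csp"]:
        return "csp"
    raise KeyError(layer)

assert who_manages("IaaS", "operating system") == "customer"
assert who_manages("PaaS", "operating system") == "csp"
assert who_manages("SaaS", "data") == "customer"  # data is always the customer's
```

Note the constant across all three models: the customer always remains responsible for their data.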

Next, let us talk about cloud security.

What is cloud security?

Cloud security refers to the set of practices, technologies, policies, and measures designed to safeguard data, applications, and infrastructure in cloud environments. It is crucial because it addresses the unique security challenges and risks associated with cloud computing, which spans services such as IaaS, PaaS, and SaaS.

Important note

Gartner reports (https://www.gartner.com/en/newsroom/press-releases/2021-11-10-gartner-says-cloud-will-be-the-centerpiece-of-new-digital-experiences) that 99% of cloud breaches are traced back to preventable misconfigurations or mistakes by cloud customers.

It is evident that cloud computing services bring some overriding concerns too, and most of them can be prevented if they are configured correctly. This includes network and system misconfigurations, IAM misconfigurations, and accidental exposure of resources. We will read more about major configuration risks in Chapter 11, but some of them are explained in the following subsection.

Defense in depth guiding principle

The guiding principle of DiD is the idea that no single security product can ensure the safety of critical data. Implementing multiple security controls at distinct levels reduces the chance of breaches caused by external or internal threats. The following diagram depicts the layered DiD concept. This approach is designed to stop attackers at multiple points in the attack chain, rather than relying on a single point of failure:

Figure 1.5 – Defense in depth (http://3.bp.blogspot.com/-YNJp1PXeV0o/UjpD7j1-31I/AAAAAAAADJE/O_6COIge7CA/s1600/TechnetDinD.jpg)

The guiding principle of DiD is a strategy that is used to provide multiple layers of protection for a system or organization. Some important security practices that are used in DiD are as follows:

  • Least-privilege: Least-privilege access is the practice of granting just enough access to the user so that they can perform their designated task in the organization and restrict their access to all other resources and systems. Limiting permissions on a user’s identity helps minimize risk in case credentials are compromised and an unauthorized user attempts to access sensitive data.
  • Multi-factor authentication (MFA): This is a security mechanism that requires users to provide two or more factors of authentication to access a system or application. This approach adds an extra layer of security to the authentication process, making it more difficult for attackers to gain unauthorized access. They can use either software or hardware tokens to provide an additional layer of security beyond a user’s password:
    • Software tokens are typically generated by a mobile app or software program. Once the user has entered their username and password, they are prompted to enter a one-time code generated by the app or software. This code is typically valid for only a short period and changes frequently, making it difficult for attackers to intercept and reuse.
    • Hardware tokens, on the other hand, are physical devices that generate one-time codes that the user must enter to complete the authentication process. These tokens may be in the form of key fobs, smart cards, or USB devices. The user inserts the hardware token into a device or presses a button to generate a code, which they then enter into the system or application being accessed.

Both software and hardware tokens provide an additional layer of security by requiring something in addition to the user’s password to gain access to a system or application. However, hardware tokens are generally considered more secure as they are not susceptible to attacks that can compromise software-based tokens, such as malware or phishing attacks. They also require physical possession of the token, making it more difficult for attackers to gain access, even if they have compromised the user’s password.
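The one-time codes that both software and hardware tokens generate are typically produced by standard HMAC-based algorithms: HOTP (RFC 4226) for counter-based tokens and TOTP (RFC 6238) for the familiar 30-second rotating codes. A minimal, stdlib-only sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP keyed by the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 Appendix D test vectors for the shared secret "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Because the code changes with each counter value (or time window), an intercepted code is useless moments later, which is exactly the property the text describes.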

  • Network segmentation: This is the practice of dividing computer networks into smaller parts to limit the exposure of internal systems and data to vendors, contractors, and other outside or inside users. This also helps the security team protect sensitive data from insider threats, limit the spread of malware, and comply with data regulations.
  • Intrusion detection and prevention: Intrusion detection and prevention systems can be used to detect and prevent attacks on a system or network. These systems can be configured to alert security personnel or take automated action when an attack is detected.
  • Security training: Providing security awareness training to employees is an important security practice to ensure that they understand the importance of security and are aware of common threats and attack vectors.

These are just a few examples of the security practices that are part of DiD. Implementing these practices in a comprehensive and layered approach can help improve the overall security of an organization.
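The least-privilege practice from the list above reduces, at its core, to a deny-by-default permission check. The roles and permission strings below are hypothetical, used only to illustrate the shape of such a check:

```python
# Hypothetical roles and permissions, illustrating least-privilege checks.
ROLES = {
    "payroll-clerk": {"payroll:read"},
    "payroll-admin": {"payroll:read", "payroll:write"},
    "auditor":       {"payroll:read", "audit-log:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: access requires an explicit grant on the role."""
    return permission in ROLES.get(role, set())

assert is_allowed("payroll-clerk", "payroll:read")
assert not is_allowed("payroll-clerk", "payroll:write")  # least privilege
assert not is_allowed("unknown-role", "payroll:read")    # deny by default
```

If a clerk's credentials are compromised, the attacker still cannot modify payroll data, which is the risk-limiting effect least privilege is meant to provide.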

The CIA triad

Not to be confused with the Central Intelligence Agency of the same acronym, CIA stands for confidentiality, integrity, and availability. It is a widely popular information security model that helps an organization protect its sensitive critical information and assets from unauthorized access:

Figure 1.6 – The CIA triad (https://devopedia.org/images/article/178/8179.1558871715.png)

The preceding diagram depicts the CIA triad. Let’s understand its attributes in detail.

Confidentiality

Confidentiality ensures that sensitive information is kept private and accessible only to authorized individuals or entities. It aims to prevent unauthorized disclosure, protecting information from being accessed or viewed by unauthorized users. Consider the payroll system of an organization as an example: confidentiality ensures that employee salary information, tax details, and other sensitive financial data are kept private and accessible only to authorized personnel. Unauthorized access to such information can lead to privacy breaches, identity theft, or financial fraud.

Integrity

Integrity ensures that information remains accurate, trustworthy, and unaltered throughout its life cycle. It safeguards against unauthorized modifications, deletions, or data tampering. In the payroll system example, integrity ensures that payroll data remains accurate and unchanged; any unauthorized modification could lead to incorrect salary payments, tax discrepancies, or compliance issues.
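A standard way to detect tampering is a cryptographic digest: recompute the hash of the record and compare it with a trusted baseline. A stdlib sketch using the payroll example (the record fields are made up):

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Stable SHA-256 digest of a record; any change alters the digest."""
    canonical = json.dumps(record, sort_keys=True).encode()  # key order must not matter
    return hashlib.sha256(canonical).hexdigest()

original = {"employee": "E-1001", "salary": 72000}
baseline = fingerprint(original)

tampered = dict(original, salary=92000)       # an unauthorized modification
assert fingerprint(original) == baseline      # unchanged data verifies
assert fingerprint(tampered) != baseline      # tampering is detected
```

In practice, the baseline digest would be stored separately from the data (or signed), so an attacker cannot update both at once.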

Availability

Availability ensures that information and services are accessible and operational when needed, without disruptions. It focuses on preventing outages or denial of service so that authorized users can access the information and services they require without interruption. In the payroll system example, availability ensures that the system is accessible and functional when needed; payroll processing is critical for employee satisfaction and business operations, and any disruption could result in delayed payments or other financial issues.

Overall, the CIA triad provides a framework for organizations to develop effective cybersecurity strategies. By focusing on confidentiality, integrity, and availability, organizations can ensure that their systems and data are protected from a wide range of threats, including cyberattacks, data breaches, and other security incidents.

Technical requirements

In the age of digital innovation, cloud computing has become the backbone of modern business operations. The convenience, scalability, and cost-efficiency of the cloud have revolutionized how we store, process, and share data. As we embrace the cloud’s potential, we must also acknowledge the growing importance of cloud security. Protecting our digital assets from a range of threats is paramount in this interconnected world. Cloud security encompasses a wide range of concerns, including data protection, access control, compliance with regulatory requirements, and the overall integrity and confidentiality of information stored and processed in the cloud.

This chapter focuses on building a baseline understanding of cloud security – the key principles and strategies that underpin our ability to operate securely in the cloud. You will learn about some of the most important topics in cloud security, such as the shared responsibility model, defense in depth, the Zero Trust model, compliance concepts in the cloud, and the Cloud Adoption Framework.

The following main topics are covered in this chapter:

  • What is cloud computing?
  • Exploring cloud security
  • The shared responsibility model
  • Defense in depth
  • The Zero Trust model
  • Compliance concepts
  • Cryptography and encryption in the cloud
  • The Cloud Adoption Framework

Let us get started!

Technical requirements

To get the most out of this chapter, you are expected to have the following:

  • A baseline understanding of cloud computing concepts.
  • A general understanding or experience of working in an IT environment. To have a better understanding, you can use the sandbox environment of the organization’s CSPM tool, if available.

What is cloud computing?

Cloud computing is a technology that allows organizations and individuals to access and use computing resources such as processing power, storage, and software over the internet without having to buy and maintain physical infrastructure. Cloud service providers (CSPs) such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and many other providers offer these services. Cloud offerings empower traditional IT offerings by adding many other services such as artificial intelligence (AI), machine learning (ML), Internet of Things (IoT), and security.

Cloud computing is a powerful technology for organizations of all sizes. Here are some of the key features of cloud computing:

  • Agility: Cloud computing allows organizations to rapidly deploy and scale computing resources up or down as needed, which means they can be more agile and respond quickly to changing business requirements. With cloud computing, businesses can avoid the time and expense of building and managing their IT infrastructure, allowing them to focus on developing and delivering their products and services.
  • Productivity: Cloud computing can improve productivity by providing access to computing resources and software from anywhere, on any device, and at any time. This flexibility allows employees to work remotely and collaborate more easily, which can lead to increased productivity and efficiency:

Figure 1.1 – Cloud computing

  • Resiliency: Cloud computing can improve resiliency by providing redundancy and failover options, which means that if one computing resource fails, others can take over seamlessly. This reduces the risk of downtime and improves the availability and reliability of applications and services.
  • FinOps: Cloud computing offers Financial Operations (FinOps) capabilities that allow organizations to manage and optimize their cloud spending. This includes tools for monitoring cloud usage, forecasting costs, and optimizing resource allocation to reduce costs and maximize value.
  • Pay-as-you-go model: Cloud computing is often priced on a pay-as-you-go basis, which means that organizations only pay for the computing resources they use. This allows businesses to avoid the capital expense of buying and maintaining their IT infrastructure, and instead, pay for computing resources as an operational expense.
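The pay-as-you-go arithmetic is simple: cost scales with actual consumption rather than provisioned capacity. The hourly rates below are made up for illustration:

```python
# Illustrative pay-as-you-go estimate; the hourly rates are hypothetical.
RATE_PER_HOUR = {"small-vm": 0.05, "large-vm": 0.40}

def monthly_cost(instance_type: str, hours: float, count: int = 1) -> float:
    """Cost = hours actually consumed x hourly rate x instance count."""
    return round(RATE_PER_HOUR[instance_type] * hours * count, 2)

# Running only during business hours (~160 h/month) vs. around the clock (720 h):
assert monthly_cost("large-vm", 160) == 64.0
assert monthly_cost("large-vm", 720) == 288.0
```

This is why shutting down non-production environments outside business hours is one of the most common FinOps optimizations: the same workload costs a fraction of the always-on price.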

In summary, cloud computing provides organizations with agility, productivity, resiliency, FinOps, and a pay-as-you-go model, making it an attractive option for businesses looking to optimize their IT operations and focus on delivering value to their customers.

Gartner estimates the following by 2025 (https://www.gartner.com/en/newsroom/press-releases/2021-11-10-gartner-says-cloud-will-be-the-centerpiece-of-new-digital-experiences):

  • More than 95% of new digital workloads will be deployed on cloud-native application platforms, up from 30% in 2021
  • 70% of the new applications developed by companies will use low-code or no-code technologies
  • More than 50% of organizations will have explicit strategies to adopt cloud-delivered Secure Access Service Edge (SASE), up from less than 5% in 2020
  • 85% of organizations will embrace cloud-first principles

While these projections may look overwhelming, there is no doubt that the cloud provides extraordinary benefits to the data-driven business world.

SDDC deployment – Appendix: Preflight before Onboarding

When preparing for the deployment of your first SDDC, you need to collect the configuration data in advance. Ideally, the settings should be captured at the design stage, as discussed in the previous chapter.

The following table depicts the configuration items you need to provide to successfully deploy your first SDDC:

Configuration section: SDDC (see Figure 12.3 for details)
  • Name: Free text field. You can change the name after the deployment as well. It is recommended to use the company naming convention.
  • AWS Region: The AWS Region where your SDDC resides. The Region should fit your subscription, AWS VPC configuration, and AWS DX configuration (if in use).
  • Deployment: Single host – for POC only, limited to 60 days. Multi-host – production deployment. Stretched cluster – a deployment across two AWS AZs.
  • Host type: Select one of the available host types. The host type should fit your subscription, design, and workload requirements. You have a choice between i3.metal, i3en.metal, and i4i.metal. See Figure 12.4 for the deployment wizard where the host type is specified. VMware constantly adds new instances; check the VMware documentation for the available instances.
  • Number of hosts: The count of ESXi hosts in your first cluster. If your design requires a multi-cluster setup, you will add additional clusters after the SDDC is provisioned with the first cluster.

Configuration section: AWS Connection (see Figure 12.2 for details)
  • AWS account: This is an AWS account you own. Choose the account according to the design and security requirements.
  • Choose a VPC: Select an AWS VPC (the VPC should be precreated) in your AWS account. This VPC will become a connected VPC after the deployment.
  • Choose subnet(s): Select a subnet in your VPC (the subnet must be precreated). The subnet must have enough free IPs for the SDDC deployment (to accommodate the ESXi hosts’ ENI interfaces). The subnet also defines the destination AZ. You cannot change the subnet after the deployment. If you deploy a stretched cluster SDDC, you must select two subnets in two different AZs.

Configuration section: SDDC networking
  • Provide the management subnet CIDR: You should provide a private network subnet with enough IP addresses for the SDDC management components (vCenter, ESXi hosts, vSAN network, and so on). It is recommended to use a /23 subnet if you plan to deploy more than 10 hosts. You cannot change the subnet after the deployment. Make sure the subnet does not overlap with the on-premises or other connected networks (including AWS).

Table 12.1 – SDDC Configuration Details
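The sizing and overlap constraints above (enough free IPs, a /23 when deploying more than 10 hosts, no overlap with connected networks) can be sanity-checked before you run the wizard. The following is a minimal sketch using Python's `ipaddress` module; the function name and example CIDRs are hypothetical, not recommendations:

```python
import ipaddress

def check_management_cidr(mgmt_cidr: str, existing_cidrs: list[str],
                          planned_hosts: int) -> list[str]:
    """Return a list of warnings for a proposed SDDC management CIDR."""
    warnings = []
    mgmt = ipaddress.ip_network(mgmt_cidr, strict=True)
    if not mgmt.is_private:
        warnings.append(f"{mgmt} is not an RFC 1918 private range")
    # A /23 (or larger) is recommended when deploying more than 10 hosts
    if planned_hosts > 10 and mgmt.prefixlen > 23:
        warnings.append(f"{mgmt} is smaller than /23; tight for {planned_hosts} hosts")
    # The management CIDR must not overlap on-premises or connected AWS networks
    for cidr in existing_cidrs:
        other = ipaddress.ip_network(cidr)
        if mgmt.overlaps(other):
            warnings.append(f"{mgmt} overlaps connected network {other}")
    return warnings

# Hypothetical networks for illustration only
print(check_management_cidr("10.10.0.0/23", ["10.0.0.0/16", "172.16.0.0/12"],
                            planned_hosts=12))  # -> [] (no warnings)
```

Because the management CIDR cannot be changed after deployment, running a check like this against every connected network range is cheap insurance.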

You can review the deployment wizard in Figure 12.3:

Figure 12.3 – SDDC deployment wizard: SDDC properties

You can review the VPC and subnet details of the SDDC wizard in Figure 12.4:

Figure 12.4 – SDDC deployment wizard: AWS VPC and subnet

After you have provisioned the SDDC, you must configure access to the vSphere Web Client to manage your SDDC through VMware vCenter Server. You will use the NSX Manager UI to create a Management Gateway firewall rule. By default, access to vCenter is not allowed: you specify an IP address or a subnet and entitle it to access vCenter. An “allow all” rule is not possible.
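Because the Management Gateway denies access by default and rejects “allow all” sources, it can help to validate candidate rule sources before entering them in the NSX Manager UI. The sketch below models only that constraint, not the NSX API itself; the function name is hypothetical:

```python
import ipaddress

def valid_mgw_source(source: str) -> bool:
    """Accept a specific IP or subnet as a Management Gateway rule source;
    reject 'any' and 0.0.0.0/0, which the MGW does not permit."""
    if source.strip().lower() == "any":
        return False  # an "allow all" management rule is not possible
    net = ipaddress.ip_network(source, strict=False)
    return net.prefixlen > 0  # 0.0.0.0/0 has a prefix length of 0

print(valid_mgw_source("203.0.113.10/32"))  # single admin IP -> True
print(valid_mgw_source("0.0.0.0/0"))        # allow-all -> False
```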

Hybrid cloud configuration – Appendix: Preflight before Onboarding

If your design requires establishing a connection to on-premises, several configuration changes must be made to establish the connection. If you also need to configure HCX for migration, this adds some complexity to the deployment. The following table lists the relevant configuration items to consider for a hybrid cloud deployment:

| Configuration section | Configuration item | Description |
| --- | --- | --- |
| Network configuration | VPN | Policy-based or route-based. See the networking section in Chapter 2 for more details on VPNs. |
| | AWS DX (see Figure 12.5) | You can choose to use the AWS DX service to gain predictable latency and possibly higher throughput for your workload. You can leverage the following: an AWS DX provisioned as a private VIF to your SDDC; an AWS DX VIF connected to an AWS DX Gateway (DXGW), in which case you will use an SDDC group and a vTGW to connect your SDDC(s) to the DXGW; or a cloud connector service provider, which can offer an alternative by sharing cloud connectivity lines. From the SDDC perspective, the connection would still be in the form of a private VIF or a connection to a DXGW. |
| | Dynamic routing support | VMware Cloud on AWS supports only the BGP dynamic routing protocol. You can filter incoming/outgoing routes and/or announce 0.0.0.0/0 to route all SDDC traffic through the selected connection. If you have multiple connections from on-premises to the cloud, it is important to keep the routing information consistent (e.g., avoid announcing 0.0.0.0/0 through DX and specific subnets through a route-based VPN). |
| SDDC management (see Figure 12.6) | vCenter Server | Reconfigure to use a private IP. |
| | NSX Manager | Reconfigure to use a private IP. |
| | HCX Manager | Reconfigure to use a private IP. |
| Firewall | Management Gateway Firewall | Ensure the on-premises CIDRs that require access to vCenter/NSX Manager/HCX Manager are included in the management firewall rules. |
| | Compute Gateway Firewall | Ensure you add on-premises CIDRs and map them to the DX/VPN interface. |
| Migration service | Activate HCX | HCX Enterprise is included with VMware Cloud on AWS SDDC. |
| | Pair HCX Managers | Configure a pairing between on-premises and the cloud. You can have multiple site pairs if needed. |
| | Configure a network profile (see Figure 12.7) | Configure HCX on VMware Cloud on AWS to use the “directConnectNetwork1” network profile. Add a non-overlapping private CIDR (different from the SDDC management network). HCX will use this network to establish connectivity between the appliances. The SDDC workflow will automatically add the subnet to the BGP route distribution and create the required firewall rules. |
| | Create a service mesh | Override the network uplink configuration to use the directConnectNetwork1 network profile while configuring the service mesh. |
| | Configure network extension | The HCX Network Extension service can extend vSphere VDS VLAN-based port groups to the cloud. You can enable high availability for your NE appliances (you need to configure an HA group before extending a VLAN). |
| Migrate workloads | Identify VMs to be migrated | Identify the VMs that make up an application and migrate them as part of the same migration group. |
| | Select migration type | Select between vMotion, bulk migration, and replication-assisted vMotion (RAV). See Chapter 3, which covers HCX migrations in great depth. |
| | Configure schedule | Use this option to define the switchover/start of vMotion. If using bulk migration or RAV, make sure HCX has enough time to replicate the virtual machine data. |

Table 12.2 – Hybrid Cloud configuration details
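The routing caveat in the table above (avoid announcing 0.0.0.0/0 through DX while announcing specific subnets through a route-based VPN) can be checked mechanically against your planned announcements. The sketch below assumes you can list each connection's announced prefixes; the function name and example prefixes are illustrative:

```python
import ipaddress

def routing_conflicts(announcements: dict[str, list[str]]) -> list[str]:
    """Flag prefixes announced on one connection that are covered by a broader
    announcement (e.g., 0.0.0.0/0) on a different connection."""
    flat = [(conn, ipaddress.ip_network(cidr))
            for conn, cidrs in announcements.items() for cidr in cidrs]
    conflicts = []
    for conn_a, net_a in flat:
        for conn_b, net_b in flat:
            # A more specific prefix on another connection will attract that
            # traffic away from the broader announcement
            if conn_a != conn_b and net_a != net_b and net_a.subnet_of(net_b):
                conflicts.append(f"{net_a} via {conn_a} is shadowed by "
                                 f"{net_b} via {conn_b}")
    return conflicts

# Hypothetical plan: default route over DX, a specific subnet over VPN
print(routing_conflicts({"dx": ["0.0.0.0/0"], "vpn": ["192.168.10.0/24"]}))
```

A non-empty result means traffic for the specific subnet would bypass the connection carrying the default route, which is exactly the asymmetry the table warns against.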

You can review the Direct Connect configuration in Figure 12.5.

Figure 12.5 – AWS DX VIF attached to an SDDC

You can review the FQDN configuration in Figure 12.6:

Figure 12.6 – Configure vCenter Server, HCX, and NSX FQDN resolution

You can review the configuration of HCX to leverage an AWS Direct Connect (DX) connection in Figure 12.7:

Figure 12.7 – VMware Cloud on AWS HCX network profile: uplink over AWS DX

Next steps

Now that you have completed the basic SDDC setup and connected the SDDC to on-premises, you can use the following list to get further information about the services and next steps: