Amazon Web Services (AWS) Overview
Amazon Web Services (AWS) is a comprehensive cloud computing platform spanning infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) offerings. AWS provides a broad set of global compute, storage, database, analytics, application, and deployment services that help organizations move faster, lower IT costs, and scale applications. With data centers in multiple geographic regions, AWS lets organizations deploy resilient, fault-tolerant applications with global reach while meeting data sovereignty and compliance requirements.
AWS Core Concepts
AWS Business Value
At its core, AWS provides a flexible, cost-effective way for businesses to access computing resources without the upfront investment and maintenance of physical infrastructure.
Key Business Benefits
- Cost Efficiency: Pay only for what you use, with no upfront costs or long-term commitments for most services.
- Scalability: Easily scale resources up or down based on demand, ensuring you have the right capacity at the right time.
- Agility: Launch new applications and services quickly without waiting for hardware procurement or setup.
- Global Reach: Deploy applications worldwide in minutes with AWS's global infrastructure.
- Security: Benefit from AWS's secure infrastructure and wide range of compliance certifications.
- Innovation: Access cutting-edge technologies like machine learning, IoT, and serverless computing without specialized expertise.
Business Model
- Consumption-Based Pricing: Most services are billed based on actual usage, allowing cost to scale with your business needs.
- Reserved Capacity Options: Commit to using certain resources for 1-3 years for significant discounts.
- Free Tier: Many services offer a free tier for learning and small workloads.
- Volume Discounts: Costs generally decrease as usage increases.
- Total Cost of Ownership (TCO): Often lower than traditional on-premises infrastructure when considering all costs.
Business Use Cases
- Startups: Launch with minimal upfront cost and scale as the business grows.
- Enterprises: Modernize IT infrastructure, improve operational efficiency, and accelerate innovation.
- Web Applications: Host websites and applications with high availability and global distribution.
- Data Analytics: Process and analyze large datasets without investing in specialized hardware.
- Disaster Recovery: Create cost-effective backup and recovery solutions across multiple geographic regions.
- Development and Testing: Create and tear down environments on demand without capital expense.
AWS Technical Foundation
AWS is built on a distributed systems architecture with several key technical concepts:
Global Infrastructure
- Regions: Geographic areas containing multiple Availability Zones. Each Region is completely independent and isolated from other Regions.
- Availability Zones (AZs): Physically separate data centers within a Region, connected with low-latency links but isolated from failures in other AZs.
- Edge Locations: Points of presence used by CloudFront (CDN) for content delivery and by Route 53 for low-latency DNS resolution.
- Local Zones: Infrastructure deployments that place compute, storage, and database services closer to large population and industry centers.
Service Models
- Infrastructure as a Service (IaaS): Provides virtualized computing resources (EC2, EBS, VPC).
- Platform as a Service (PaaS): Offers platforms for developing, running, and managing applications (Elastic Beanstalk, ECS, EKS).
- Software as a Service (SaaS): Delivers software applications over the internet (Amazon WorkMail, Amazon Connect).
- Function as a Service (FaaS): Allows running code without managing servers (Lambda).
Security and Identity
- Shared Responsibility Model: Divides security responsibility between AWS (security of the cloud) and customer (security in the cloud).
- IAM (Identity and Access Management): Controls authentication and authorization for AWS resources.
- Resource Policies: Define permissions directly on resources like S3 buckets or SQS queues.
- Security Groups: Act as virtual firewalls controlling traffic to EC2 instances.
- Network ACLs: Provide stateless packet filtering at the subnet level.
Deployment and Management
- AWS Management Console: Web-based interface to manage AWS resources.
- AWS CLI: Command-line tool for managing AWS services.
- AWS SDKs: Software development kits for various programming languages.
- Infrastructure as Code: Tools like CloudFormation, CDK, and Terraform for provisioning resources.
- API-Driven Architecture: All AWS services expose APIs for programmatic access.
AWS Service Categories
Core AWS Services
This section details the most important and widely-used AWS services that form the foundation of most AWS deployments.
Compute Services
- EC2
- Lambda
- ECS/EKS
Amazon Elastic Compute Cloud (EC2)
Business Value
EC2 provides flexible computing capacity that eliminates the need for upfront hardware investments and helps businesses address changing requirements:
- Cost Optimization: Choose the right instance types, purchasing options (On-Demand, Reserved, Savings Plans, Spot), and Auto Scaling to align costs with actual needs
- Rapid Deployment: Spin up new servers in minutes rather than weeks for procurement
- Global Reach: Deploy applications closer to users across multiple geographic regions
- Business Continuity: Distribute instances across Availability Zones for fault tolerance
- Development & Testing: Create development environments on demand and terminate when not needed
Cost Considerations:
- On-Demand instances are billed by the second with no commitments (best for variable workloads)
- Reserved Instances offer up to 72% savings with 1-3 year commitments (best for predictable workloads)
- Spot Instances provide up to 90% discounts but can be terminated with short notice (best for fault-tolerant workloads)
- Savings Plans offer a flexible pricing model with 1-3 year commitments that applies across compute services (EC2, Fargate, Lambda)
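The purchasing options above differ mainly in the effective hourly rate. A back-of-the-envelope comparison (using illustrative rates, not current AWS prices) shows how the discounts compound over a year of continuous use:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_cost(hourly_rate, hours=HOURS_PER_YEAR):
    """Annual cost of an instance running continuously at a given rate."""
    return hourly_rate * hours

# Illustrative rates for a small instance (check current AWS pricing):
on_demand_rate = 0.0416                  # $/hour, pay-as-you-go
reserved_rate = on_demand_rate * 0.60    # e.g. a 40% Reserved discount
spot_rate = on_demand_rate * 0.30        # e.g. a 70% Spot discount

print(round(annual_cost(on_demand_rate), 2))  # 364.42
print(round(annual_cost(reserved_rate), 2))   # 218.65
print(round(annual_cost(spot_rate), 2))       # 109.32
```

The same arithmetic applied to real price-list figures is how tools like the AWS Pricing Calculator compare commitment options.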
Technical Implementation
Amazon EC2 provides resizable virtual servers (instances) in the cloud. Key technical aspects include:
- Instance Types: Organized by families optimized for different use cases (compute, memory, storage, GPU, etc.) and available in various sizes
- Amazon Machine Images (AMIs): Pre-configured templates containing OS and software
- Instance Store vs EBS: Ephemeral storage directly attached to the host server vs persistent block storage volumes
- Placement Groups: Control instance placement strategy (cluster, spread, partition)
- Auto Scaling: Automatically adjust capacity based on demand using scaling policies
- Spot Instances: Use spare EC2 capacity at steep discounts compared to On-Demand pricing; instances can be reclaimed by AWS with a two-minute warning (the older bidding model has been retired)
Common EC2 CLI Operations:
# Launch a new EC2 instance
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name MyKeyPair
# Describe running instances
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
# Stop an instance
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
# Create an Auto Scaling group
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg \
--launch-configuration-name my-launch-config \
--min-size 1 --max-size 3 --desired-capacity 2 \
--availability-zones us-east-1a us-east-1b
AWS Lambda
Business Value
Lambda offers businesses a way to run code without managing servers, providing significant operational and cost benefits:
- Zero Infrastructure Management: Eliminate server provisioning, patching, and scaling concerns
- Precise Cost Alignment: Pay only for compute time consumed, not for idle resources
- Automatic Scaling: Handle any workload size from a few requests per day to thousands per second
- Microservices Architecture: Build applications composed of small, independent functions
- Event-Driven Processing: Respond to business events in real-time
Common Use Cases:
- Data processing (e.g., real-time file processing, log analysis)
- Backend for web, mobile, and IoT applications
- Task automation and scheduled jobs
- Stream processing with Kinesis integrations
- Rapid prototyping and building MVPs with minimal infrastructure
Cost Model: Lambda charges based on number of requests, duration of execution, and memory allocated. The free tier includes 1 million free requests and 400,000 GB-seconds of compute time per month.
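This cost model can be sketched in a few lines. The per-request and per-GB-second rates below reflect published x86 pricing at the time of writing; treat them as illustrative and check the current price list:

```python
# Illustrative Lambda prices (verify against the current AWS price list):
PRICE_PER_REQUEST = 0.20 / 1_000_000    # $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667
FREE_REQUESTS = 1_000_000               # monthly free tier
FREE_GB_SECONDS = 400_000

def monthly_lambda_cost(requests, avg_duration_ms, memory_mb):
    """Estimate a monthly Lambda bill from request count, average
    duration, and memory allocation, after the free tier."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# 10M invocations/month, 200 ms average, 1024 MB memory:
print(round(monthly_lambda_cost(10_000_000, 200, 1024), 2))  # 28.47
# A small workload stays entirely inside the free tier:
print(monthly_lambda_cost(500_000, 100, 128))  # 0
```

Note that real billing rounds duration up to the nearest millisecond per invocation; averages are close enough for estimation.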
Technical Implementation
AWS Lambda is a serverless compute service that runs your code in response to events without provisioning or managing servers.
- Execution Model: Code runs in stateless containers initialized on demand and automatically scaled
- Supported Runtimes: Node.js, Python, Java, Go, .NET, Ruby, and custom runtimes
- Event Sources: Services that trigger Lambda functions, including S3, DynamoDB, API Gateway, SQS, EventBridge, etc.
- Limits: Execution duration (15 min max), memory allocation (128MB-10GB), deployment package size (50MB zipped, 250MB unzipped)
- Concurrency: Number of function instances that can run simultaneously, with 1,000 concurrent executions per region by default
- Layers: Reusable components for sharing code, libraries, or custom runtimes across functions
Sample Lambda Function (Node.js):
exports.handler = async (event) => {
  console.log('Event:', JSON.stringify(event, null, 2));
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello from Lambda!',
      input: event,
    }),
  };
  return response;
};
Container Services (ECS/EKS)
Business Value
Container services provide significant business advantages for application deployment and management:
Benefits of Both ECS and EKS
- Application Portability: Package applications and dependencies in containers that run consistently across environments
- Resource Efficiency: Higher resource utilization compared to traditional VMs
- Scalability: Easily scale containerized applications based on demand
- CI/CD Integration: Streamlined deployments and rollbacks
- Hybrid Compatibility: Run the same containers on-premises and in the cloud
ECS vs EKS: Business Considerations
- ECS: Lower operational overhead, deeper AWS integration, simpler learning curve, ideal for AWS-focused organizations
- EKS: Better for multi-cloud strategies, existing Kubernetes expertise, access to the broader Kubernetes ecosystem, avoiding vendor lock-in
The container services pricing model is based on the underlying infrastructure (EC2 or Fargate) with no additional charge for ECS itself. EKS has an additional control plane charge of $0.10 per hour per cluster.
Technical Implementation
AWS offers two primary container orchestration services: Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
Amazon ECS
- Architecture: AWS-specific container orchestration service
- Task Definitions: JSON files defining containers, resources, networking, and volumes
- Launch Types: EC2 (self-managed hosts) or Fargate (serverless)
- Services: Maintain desired count of tasks and can integrate with load balancers
- Deep AWS Integration: Native integration with IAM, CloudWatch, VPC, etc.
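A minimal Fargate task definition illustrates the JSON structure; the account ID, image name, and sizes below are placeholders, not values from this document:

```json
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
      "essential": true
    }
  ]
}
```

A file like this is registered with `aws ecs register-task-definition --cli-input-json file://taskdef.json`, after which an ECS service keeps the desired number of copies running.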
Amazon EKS
- Architecture: Managed Kubernetes service compatible with standard K8s tools
- Control Plane: AWS manages the Kubernetes control plane across multiple AZs
- Node Types: EC2 instances, Fargate (serverless), or self-managed nodes
- Add-ons: Support for standard Kubernetes add-ons like CoreDNS, kube-proxy, etc.
- Compatibility: Works with standard Kubernetes tooling (kubectl, Helm, etc.)
Storage Services
- S3
- EBS
Amazon Simple Storage Service (S3)
Business Value
S3 provides secure, durable, and scalable object storage for a wide range of business use cases:
- 99.999999999% Durability: Designed to protect data from site-level failures, errors, and threats
- Cost Optimization: Storage classes allow balancing access needs with cost efficiency
- Scalability: Virtually unlimited storage capacity that grows with your business
- Global Access: Content accessible worldwide with built-in capabilities to accelerate delivery
- Data Protection: Versioning, replication, and point-in-time recovery protect critical business assets
Common Business Use Cases:
- Backup and disaster recovery
- Content and media distribution
- Data lakes and big data analytics
- Static website hosting
- Mobile and web application asset storage
- Long-term archiving and compliance storage
Cost Structure: Pay only for what you use, with charges based on:
- Storage volume by storage class
- Data transfer out of the AWS region
- Request and data retrieval pricing
- Data management features (e.g., inventory, analytics)
Technical Implementation
Amazon S3 is an object storage service offering industry-leading scalability, availability, security, and performance.
- Object Storage Model: Store and retrieve any amount of data as objects (files) within buckets (containers)
- Data Consistency: Strong read-after-write consistency for all operations
- Storage Classes:
- S3 Standard: General-purpose storage for frequently accessed data
- S3 Intelligent-Tiering: Automatically moves objects between access tiers
- S3 Standard-IA: For infrequently accessed data with rapid access when needed
- S3 One Zone-IA: Lower-cost option for infrequently accessed data that doesn't require multi-AZ resilience
- S3 Glacier: Low-cost storage for data archiving with retrieval times from minutes to hours
- S3 Glacier Deep Archive: Lowest-cost storage for long-term retention with retrieval time of hours
- Access Control: Bucket policies, IAM policies, Access Control Lists (ACLs), Access Points
- Data Management: Lifecycle policies, versioning, replication, object locking
- Performance Optimization: Multipart uploads, transfer acceleration, byte-range fetches
Common S3 Operations:
# Upload a file to S3
aws s3 cp myfile.txt s3://my-bucket/
# List objects in a bucket
aws s3 ls s3://my-bucket/
# Create a bucket with versioning enabled
aws s3api create-bucket --bucket my-new-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket my-new-bucket --versioning-configuration Status=Enabled
# Configure lifecycle policy
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json
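The lifecycle.json referenced above could contain a configuration like this sketch, which transitions objects under an assumed logs/ prefix to cheaper storage classes before expiring them:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 365}
    }
  ]
}
```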
Amazon Elastic Block Store (EBS)
Business Value
EBS provides reliable block storage that supports business-critical applications with specific performance needs:
- Data Persistence: Maintain data independently of EC2 instance lifecycle
- Performance Optimization: Match storage performance to application requirements
- Data Protection: Up to 99.999% durability (io2; other volume types are designed for 99.8-99.9%) with point-in-time snapshots for backup and disaster recovery
- Cost Flexibility: Choose storage type based on price/performance needs
- Security Compliance: Native encryption for data-at-rest requirements
Common Business Use Cases:
- Enterprise applications requiring consistent I/O performance
- Relational and NoSQL databases
- Data warehousing applications
- Business continuity with cross-region replication
- Development and test environments
Cost Considerations:
- Pricing based on volume type, size, and provisioned performance
- Snapshot storage charged at standard S3 rates
- No charges for data transfer between EBS and EC2 in the same AZ
- Cost optimization through right-sizing volumes and choosing appropriate volume types
Technical Implementation
Amazon EBS provides persistent block storage volumes for use with EC2 instances, functioning much like network-attached hard drives.
- Volume Types:
- General Purpose SSD (gp2/gp3): Balance of price and performance for a wide variety of workloads
- Provisioned IOPS SSD (io1/io2): High-performance for I/O-intensive workloads like databases
- Throughput Optimized HDD (st1): Low-cost for frequently accessed, throughput-intensive workloads
- Cold HDD (sc1): Lowest cost for less frequently accessed workloads
- Key Features:
- Encryption: AES-256 encryption using AWS KMS
- Snapshots: Point-in-time backups stored in S3
- Elasticity: Volumes can be resized and type can be changed on the fly
- Multi-Attach: Attach the same volume to multiple instances (io1/io2 only)
- Performance: Up to 256,000 IOPS and 4,000 MB/s throughput per volume (io2 Block Express)
- Volume Lifecycle: Independent from EC2 instances, persisting beyond instance termination
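For gp2 volumes, baseline performance scales with size: 3 IOPS per GiB, floored at 100 and capped at 16,000 IOPS (gp3, by contrast, starts at a 3,000 IOPS baseline regardless of size). A sketch of the gp2 rule:

```python
def gp2_baseline_iops(size_gib):
    """gp2 baseline: 3 IOPS per GiB, minimum 100, maximum 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(20))     # 100   (floor applies to small volumes)
print(gp2_baseline_iops(100))    # 300
print(gp2_baseline_iops(10000))  # 16000 (cap)
```

This is one reason gp3 is often cheaper for small, IOPS-hungry volumes: performance no longer has to be bought by over-provisioning capacity.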
Common EBS Operations:
# Create a new EBS volume
aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp3
# Attach a volume to an instance
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-1234567890abcdef0 --device /dev/sdf
# Create a snapshot of a volume
aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "Daily backup"
# Create a volume from a snapshot
aws ec2 create-volume --availability-zone us-east-1a --snapshot-id snap-1234567890abcdef0
Database Services
- RDS
- DynamoDB
Amazon Relational Database Service (RDS)
Business Value
RDS eliminates the operational burden of database management, allowing businesses to focus on application development and business growth:
- Reduced Administrative Overhead: AWS handles routine database tasks like patching, backups, and monitoring
- High Availability: Multi-AZ deployments provide business continuity with automated failover
- Scalability: Easy vertical scaling by changing instance class and horizontal scaling with read replicas
- Regulatory Compliance: Supports compliance with requirements for encryption and backup retention
- Pay-as-you-go Model: No upfront costs for database licenses or infrastructure
Common Business Use Cases:
- Web applications and e-commerce platforms
- ERP, CRM, and other enterprise applications
- SaaS applications requiring relational data storage
- Mobile app backends
- Legacy application migration to the cloud
Cost Optimization Strategies:
- Reserved Instances for predictable workloads
- Appropriate instance sizing based on workload
- Aurora Serverless for variable or unpredictable workloads
- Storage autoscaling to match growth patterns
- Multi-tenant database architectures for SaaS applications
Technical Implementation
Amazon RDS is a managed relational database service that makes it easier to set up, operate, and scale a relational database in the cloud.
- Supported Engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, SQL Server
- Deployment Options:
- Single-AZ: Database in one Availability Zone
- Multi-AZ: Synchronous standby replica in a different AZ for high availability
- Read Replicas: Asynchronous replicas for read scaling and geographic distribution
- Managed Features:
- Automated backups with point-in-time recovery
- Automated software patching
- Monitoring and metrics via CloudWatch
- Scaling (vertical and horizontal with read replicas)
- Security: Network isolation via VPC, encryption at rest (KMS) and in transit (SSL), IAM database authentication
- Performance: Instance types optimized for database workloads, IOPS provisioning, Performance Insights monitoring
Sample RDS Commands:
# Create a MySQL database instance
aws rds create-db-instance \
--db-instance-identifier mydb \
--db-instance-class db.t3.small \
--engine mysql \
--master-username admin \
--master-user-password secret99 \
--allocated-storage 20
# Create a read replica
aws rds create-db-instance-read-replica \
--db-instance-identifier mydb-replica \
--source-db-instance-identifier mydb
# Enable automated backups
aws rds modify-db-instance \
--db-instance-identifier mydb \
--backup-retention-period 7 \
--apply-immediately
Amazon DynamoDB
Business Value
DynamoDB offers compelling business advantages for applications requiring high scalability, availability, and low-latency data access:
- Serverless Operations: Zero operational overhead with no servers to manage
- Unlimited Scale: Automatically scales to handle any amount of traffic without performance degradation
- High Availability: Data automatically replicated across three AZs, with a 99.99% availability SLA (99.999% for global tables)
- Predictable Performance: Consistent single-digit millisecond response times regardless of data volume
- Cost Flexibility: Choose between on-demand pricing for variable workloads or provisioned capacity for predictable ones
Common Business Use Cases:
- Mobile and web applications with variable traffic patterns
- Gaming applications requiring low-latency data access
- IoT data collection and processing
- Session management and user profile storage
- High-volume transaction processing systems
- Microservices requiring scalable data storage
Business Impact:
- Reduce time-to-market by eliminating database administration tasks
- Support rapid business growth without database re-architecting
- Consistent user experience even during peak traffic events
- Precise capacity planning with on-demand pricing for new applications
- Global presence with multi-region tables for improved customer experience
Technical Implementation
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
- Data Model: Key-value and document database with flexible schema
- Key Structure:
- Partition Key: Determines data distribution across partitions
- Sort Key (optional): Enables range queries and hierarchical organization
- Secondary Indexes:
- Global Secondary Indexes (GSIs): Different partition key than base table
- Local Secondary Indexes (LSIs): Same partition key but different sort key
- Capacity Modes:
- Provisioned: Define read/write capacity units in advance
- On-Demand: Pay-per-request pricing with no capacity planning
- Features: ACID transactions, TTL for automatic item expiration, Streams for change data capture, global tables for multi-region replication
- Performance: Single-digit millisecond response times, unlimited throughput scaling
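Provisioned capacity is sized in read/write capacity units: one RCU covers one strongly consistent read per second of an item up to 4 KB (eventually consistent reads cost half), and one WCU covers one write per second of an item up to 1 KB. A sketch of the arithmetic:

```python
import math

def required_rcus(reads_per_sec, item_kb, strongly_consistent=True):
    """RCUs needed: each read consumes one unit per 4 KB of item size."""
    units_per_read = math.ceil(item_kb / 4)
    rcus = reads_per_sec * units_per_read
    return rcus if strongly_consistent else math.ceil(rcus / 2)

def required_wcus(writes_per_sec, item_kb):
    """WCUs needed: each write consumes one unit per 1 KB of item size."""
    return writes_per_sec * math.ceil(item_kb / 1)

print(required_rcus(10, 6))         # 20 -> ceil(6/4)=2 units x 10 reads
print(required_rcus(10, 6, False))  # 10 (eventually consistent costs half)
print(required_wcus(10, 2.5))       # 30 -> ceil(2.5)=3 units x 10 writes
```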
Sample DynamoDB Operations:
# Create a table
aws dynamodb create-table \
--table-name Users \
--attribute-definitions AttributeName=UserId,AttributeType=S \
--key-schema AttributeName=UserId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
# Put an item
aws dynamodb put-item \
--table-name Users \
--item '{"UserId": {"S": "user123"}, "Name": {"S": "John Doe"}, "Email": {"S": "john@example.com"}}'
# Query items
aws dynamodb query \
--table-name Orders \
--key-condition-expression "CustomerId = :id" \
--expression-attribute-values '{":id": {"S": "customer123"}}'
Networking Services
- VPC
- CloudFront
Amazon Virtual Private Cloud (VPC)
Business Value
VPC provides foundational network security and control that supports business and regulatory requirements:
- Network Isolation: Complete control over your virtual networking environment
- Security Posture: Multiple layers of security controls to protect applications and data
- Hybrid Connectivity: Securely connect cloud resources to on-premises infrastructure
- Compliance Framework: Network design that supports regulatory requirements (PCI DSS, HIPAA, etc.)
- Business Continuity: Multi-AZ architectures for high availability
Business Use Cases:
- Hosting multi-tier applications with security isolation between tiers
- Creating development, test, and production environments with network separation
- Extending corporate networks to the cloud in hybrid scenarios
- Implementing regulatory compliant network architectures
- Providing secure remote access to corporate resources
Cost Considerations:
- VPC itself has no additional cost
- Charges apply for VPN connections, NAT gateways, Transit Gateways, and data transfer
- Consider network design to minimize data transfer costs between AZs
- Private connectivity options can reduce internet data transfer costs
Technical Implementation
Amazon VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.
- Network Architecture:
- VPC: A logically isolated virtual network in AWS with a specified CIDR block
- Subnets: Segments of a VPC's IP address range where you place resources
- Route Tables: Control traffic routing between subnets and gateways
- Internet Gateway: Connects VPC to the internet
- NAT Gateway: Enables outbound internet for private subnets
- Security Controls:
- Security Groups: Stateful firewall rules for resources (instance level)
- Network ACLs: Stateless firewall rules for subnets (subnet level)
- Network Firewall: Advanced network security service
- Connectivity Options:
- VPC Peering: Connect VPCs privately
- Transit Gateway: Hub for connecting VPCs and on-premises networks
- VPN Connections: Secure connections to on-premises networks
- Direct Connect: Dedicated network connection to AWS
- Additional Features: Flow Logs for traffic monitoring, Endpoints for private connections to AWS services, Traffic Mirroring for packet inspection
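The CIDR arithmetic behind VPCs and subnets can be checked with Python's standard ipaddress module; note that AWS reserves five addresses in every subnet:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # 65,536 addresses
subnets = list(vpc.subnets(new_prefix=24))  # carve the VPC into /24 subnets

print(len(subnets))                  # 256 possible /24 subnets in a /16
print(subnets[0])                    # 10.0.0.0/24
print(subnets[0].num_addresses)      # 256
# AWS reserves 5 addresses per subnet (network, VPC router, DNS,
# future use, and broadcast), leaving 251 usable hosts in a /24.
print(subnets[0].num_addresses - 5)  # 251
```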
VPC Architecture Example:
# Create a VPC with CIDR block
aws ec2 create-vpc --cidr-block 10.0.0.0/16
# Create public and private subnets
aws ec2 create-subnet --vpc-id vpc-123456 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-123456 --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
# Create and attach internet gateway
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-123456 --vpc-id vpc-123456
# Create security group for web servers
aws ec2 create-security-group --group-name WebSG --description "Web servers" --vpc-id vpc-123456
aws ec2 authorize-security-group-ingress --group-id sg-123456 --protocol tcp --port 80 --cidr 0.0.0.0/0
Amazon CloudFront
Business Value
CloudFront delivers significant business benefits for organizations with global audiences and performance-sensitive content:
- Global Reach: Deliver content quickly to users worldwide from 410+ points of presence
- Performance Improvement: Reduce latency and improve user experience, leading to higher engagement and conversion rates
- Cost Efficiency: Reduce origin server load and data transfer costs from origin to internet
- Security Enhancement: Shield origins from direct access and protect against common web attacks
- Scalability: Handle traffic spikes without infrastructure changes
Business Impact:
- Improved conversion rates due to faster page load times
- Reduced abandonment rates for media streaming and downloads
- Enhanced global brand presence with consistent performance worldwide
- Lower infrastructure costs compared to maintaining global server deployments
- Improved SEO rankings due to better page performance metrics
Use Cases:
- E-commerce sites seeking to reduce load times to improve sales
- Media companies distributing video, audio, and images to global audiences
- SaaS applications requiring fast, secure delivery of dynamic content
- Gaming companies delivering downloads, updates, and dynamic content
- Mobile apps needing fast API responses from global locations
Technical Implementation
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds.
- Architecture:
- Edge Locations: Global network of data centers that cache content close to users
- Regional Edge Caches: Larger caches that sit between origin servers and edge locations
- Origin: The source of content (S3 bucket, EC2 instance, load balancer, custom HTTP server)
- Distribution: The CloudFront configuration unit that defines behavior
- Features:
- Cache Behaviors: Rules for how content should be cached and served
- Origin Groups: Failover between primary and secondary origins
- Field-Level Encryption: Protect sensitive data throughout the system
- Lambda@Edge: Run code at edge locations to customize content delivery
- CloudFront Functions: Lightweight functions for HTTP request/response manipulation
- Security: HTTPS with custom SSL certificates, AWS Shield for DDoS protection, geo-restriction, signed URLs and cookies
- Performance Optimization: Compression, TCP optimizations, persistent connections, origin shield
CloudFront Configuration Example:
# Create a CloudFront distribution pointing to an S3 bucket
aws cloudfront create-distribution \
--origin-domain-name mybucket.s3.amazonaws.com \
--default-root-object index.html \
--default-cache-behavior '{"TargetOriginId":"S3-mybucket","ViewerProtocolPolicy":"redirect-to-https","AllowedMethods":{"Quantity":2,"Items":["GET","HEAD"]},"CachePolicyId":"658327ea-f89d-4fab-a63d-7e88639e58f6"}'
# Invalidate cached content
aws cloudfront create-invalidation \
--distribution-id EDFDVBD6EXAMPLE \
--paths "/images/*" "/css/style.css"
# Add Lambda@Edge function to a distribution
aws cloudfront update-distribution \
--id EDFDVBD6EXAMPLE \
--default-cache-behavior '{"LambdaFunctionAssociations":{"Quantity":1,"Items":[{"EventType":"viewer-request","LambdaFunctionARN":"arn:aws:lambda:us-east-1:123456789012:function:my-function:1"}]}}'
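The origin-offload benefit of a CDN is easy to quantify: only cache misses reach the origin. A sketch with illustrative request volumes and cache hit ratios:

```python
def origin_requests(total_requests, cache_hit_ratio):
    """Requests that still reach the origin after CDN caching."""
    return round(total_requests * (1 - cache_hit_ratio))

# 100M monthly requests at illustrative cache hit ratios:
for ratio in (0.80, 0.90, 0.95):
    print(ratio, origin_requests(100_000_000, ratio))
# At 80% hits, 20M requests reach the origin; at 95%, only 5M do.
```

Raising the hit ratio (longer TTLs, Origin Shield, normalized cache keys) directly shrinks origin fleet size and origin data-transfer charges.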
Security and Identity Services
- IAM
- KMS
AWS Identity and Access Management (IAM)
Business Value
IAM is fundamental to security governance in AWS, providing organizations with the controls needed to meet security and compliance requirements:
- Security Governance: Centralized control over AWS resources access
- Compliance Support: Helps meet regulatory requirements for access controls and separation of duties
- Risk Reduction: Minimize security risks by implementing least privilege
- Operational Efficiency: Streamline access management across the organization
- Identity Federation: Use existing identity systems without recreating users in AWS
Business Impact:
- Prevent unauthorized access to sensitive data and resources
- Enable secure delegation of responsibilities to teams and services
- Maintain audit trail of access for compliance reporting
- Reduce operational overhead through automation and federation
- Support secure application development with fine-grained permissions
Implementation Strategies:
- Role-Based Access: Align IAM permissions with job functions
- Attribute-Based Access: Dynamic permissions based on user or resource attributes
- Federated Access: Integration with corporate directories
- Programmatic Access: API-based access for applications and services
- Emergency Access: Break-glass procedures for urgent situations
Technical Implementation
AWS IAM enables you to securely control access to AWS services and resources for your users and applications.
- Core Components:
- Users: Identities representing people or applications
- Groups: Collections of users with shared permissions
- Roles: Identities that can be assumed by trusted entities (services, applications, users)
- Policies: JSON documents defining permissions
- Policy Types:
- Identity-based: Attached to IAM users, groups, or roles
- Resource-based: Attached directly to resources (S3 buckets, SQS queues)
- Permission boundaries: Set maximum permissions for an entity
- Service control policies (SCPs): Define maximum permissions in AWS Organizations
- Features:
- Multi-factor authentication (MFA)
- Federation with external identity providers (SAML, OIDC)
- Temporary security credentials
- Access Analyzer for identifying resource access
- Best Practices: Least privilege principle, rotation of credentials, IAM Access Advisor, policy conditions
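IAM's documented evaluation logic, reduced to its core, is: an explicit Deny always wins, otherwise any matching Allow grants access, otherwise the request is implicitly denied. A deliberately simplified sketch (real IAM also matches wildcards, resources, conditions, boundaries, and SCPs):

```python
def evaluate(statements, action):
    """Simplified IAM decision: explicit Deny > any Allow > implicit deny."""
    matched = [s for s in statements if action in s["Action"]]
    if any(s["Effect"] == "Deny" for s in matched):
        return "Deny"   # an explicit deny always wins
    if any(s["Effect"] == "Allow" for s in matched):
        return "Allow"
    return "Deny"       # nothing matched: implicit deny by default

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:ListBucket"]},
    {"Effect": "Deny",  "Action": ["s3:GetObject"]},
]
print(evaluate(policy, "s3:GetObject"))   # Deny (explicit deny wins)
print(evaluate(policy, "s3:ListBucket"))  # Allow
print(evaluate(policy, "s3:PutObject"))   # Deny (implicit)
```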
IAM Configuration Examples:
# Create a user
aws iam create-user --user-name johndoe
# Create and attach a policy
aws iam create-policy --policy-name S3ReadOnly --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:Get*", "s3:List*"],
    "Resource": "*"
  }]
}'
aws iam attach-user-policy --user-name johndoe --policy-arn arn:aws:iam::123456789012:policy/S3ReadOnly
# Create a role for EC2 instances (the trust policy lets the EC2 service assume it)
aws iam create-role --role-name EC2S3Access --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}'
# EC2 instances consume roles through an instance profile
aws iam create-instance-profile --instance-profile-name EC2S3Access
aws iam add-role-to-instance-profile --instance-profile-name EC2S3Access --role-name EC2S3Access
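How IAM combines policy statements like the S3ReadOnly example can be sketched locally. The toy evaluator below uses shell-style wildcards to illustrate the Allow/Deny core only; the real evaluation engine also spans multiple policies, conditions, permission boundaries, and SCPs:

```python
# Toy evaluator for identity-based policy statements (illustration only).
import fnmatch

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """Explicit Deny wins; otherwise any matching Allow grants access."""
    allowed = False
    for stmt in policy.get("Statement", []):
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        if (any(fnmatch.fnmatchcase(action, a) for a in actions)
                and any(fnmatch.fnmatchcase(resource, r) for r in resources)):
            if stmt["Effect"] == "Deny":
                return False            # explicit Deny always wins
            allowed = True
    return allowed

# The S3ReadOnly policy document from the CLI example above
s3_read_only = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["s3:Get*", "s3:List*"],
                   "Resource": "*"}],
}

print(is_allowed(s3_read_only, "s3:GetObject", "arn:aws:s3:::my-bucket/key"))   # True
print(is_allowed(s3_read_only, "s3:PutObject", "arn:aws:s3:::my-bucket/key"))   # False
```

Note how `s3:Get*` matches any Get action but nothing else, which is the essence of least privilege: grant only the action patterns a job function needs.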
AWS Key Management Service (KMS)
Business Value
KMS provides a centralized control point for cryptographic key management, helping organizations protect sensitive data and meet compliance requirements:
- Data Protection: Secure sensitive information at rest and in transit
- Regulatory Compliance: Help meet requirements for GDPR, HIPAA, PCI DSS, and other regulations
- Key Lifecycle Management: Managed creation, rotation, and deletion of encryption keys
- Centralized Audit: Comprehensive logging of all key usage via CloudTrail
- Reduced Risk: Hardware security modules (HSMs) protect key material
Business Use Cases:
- Protecting personally identifiable information (PII) and sensitive customer data
- Securing financial information and transaction data
- Implementing encryption for intellectual property and trade secrets
- Supporting multi-tenant SaaS applications with data isolation
- Enabling secure data sharing across organizations or departments
Implementation Considerations:
- Access Patterns: Design key access policies to match organizational structure
- Key Hierarchies: Implement envelope encryption for efficient data protection
- Key Sharing: Cross-account access for enterprise-wide key management
- Monitoring: Track and alert on suspicious key usage patterns
- Cost Management: Balance between number of keys and operations for optimal cost
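The envelope-encryption pattern mentioned above (a data key encrypts the data; the master key encrypts only the data key) can be sketched in a few stdlib-only lines. The "cipher" here is a toy SHA-256 keystream used purely to show the key hierarchy; real KMS data keys are AES-256 and the master key material never leaves the HSM:

```python
# Envelope-encryption sketch. NOT real cryptography -- the keystream below only
# illustrates the wrap/unwrap structure that KMS implements with AES-256-GCM.
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter keystream (symmetric: applies twice to undo)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

master_key = os.urandom(32)          # stands in for the KMS key (never leaves KMS)

# Encrypt path: generate a data key, encrypt locally, keep only the WRAPPED key.
data_key = os.urandom(32)            # kms generate-data-key returns this pair
wrapped_data_key = keystream_xor(master_key, data_key)
ciphertext = keystream_xor(data_key, b"My secret data")
del data_key                         # plaintext data key is discarded after use

# Decrypt path: unwrap the data key under the master key, then decrypt the data.
recovered_key = keystream_xor(master_key, wrapped_data_key)
plaintext = keystream_xor(recovered_key, ciphertext)
assert plaintext == b"My secret data"
```

The design payoff: bulk data never round-trips through KMS (only the 32-byte data key does), and rotating or revoking the master key controls access to every data key wrapped under it.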
Technical Implementation
AWS KMS is a managed service that makes it easy to create and control cryptographic keys for your applications and AWS services.
- Key Types:
- KMS keys (formerly called customer master keys, or CMKs): Primary resources in KMS
- AWS-managed keys: Created and managed by AWS for use with specific services
- Customer-managed keys: Created and managed by you for your specific use cases
- AWS-owned keys: Used by AWS on a shared basis across accounts
- Key Specifications:
- Symmetric keys (AES-256): Default key type for most uses
- Asymmetric keys (RSA or ECC): For encryption/decryption or signing/verification outside AWS
- HMAC keys: For generating and verifying HMAC tags
- Key Operations:
- Encrypt/Decrypt data
- Generate data keys for client-side encryption
- Sign and verify messages
- Generate random data
- Security Controls: Key policies, IAM policies, grants, ABAC with tags, key rotation, VPC endpoints
- Integration: Native integration with AWS services like S3, RDS, EBS, Lambda, etc.
KMS Usage Examples:
# Create a customer managed key
aws kms create-key --description "Application data encryption key"
# Encrypt data (AWS CLI v2 expects binary parameters as base64, so encode the plaintext)
aws kms encrypt \
--key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
--plaintext "$(echo -n 'My secret data' | base64)" \
--output text --query CiphertextBlob | base64 --decode > encrypted_data.bin
# Decrypt data
aws kms decrypt \
--ciphertext-blob fileb://encrypted_data.bin \
--output text --query Plaintext | base64 --decode
# Generate a data key for client-side encryption
aws kms generate-data-key \
--key-id alias/application-key \
--key-spec AES_256
Management and Monitoring
- CloudWatch
- CloudFormation
Amazon CloudWatch
Business Value
CloudWatch provides observability across your entire infrastructure and application stack, enabling you to optimize performance, maintain availability, and respond quickly to issues:
- System-Wide Visibility: Single pane of glass for monitoring all resources and applications
- Proactive Management: Detect and address issues before they impact users
- Operational Efficiency: Automate responses to common problems
- Performance Optimization: Identify bottlenecks and opportunities for improvement
- Cost Management: Monitor and optimize resource utilization
Business Impact:
- Reduced mean time to detection (MTTD) and resolution (MTTR) for issues
- Improved application reliability and user experience
- Lower operational costs through automation and efficient resource use
- Better capacity planning based on historical trends
- Enhanced security posture through anomaly detection
Implementation Strategy:
- Foundational Monitoring: Start with basic infrastructure metrics
- Application Insights: Add custom metrics and logs for application context
- Proactive Alerting: Configure alarms for critical thresholds
- Automated Response: Use EventBridge and Lambda to respond to alerts
- Business Metrics: Correlate technical metrics with business outcomes
Technical Implementation
Amazon CloudWatch is a monitoring and observability service that provides data and actionable insights for AWS resources and applications.
- Core Components:
- Metrics: Time-series data points for resources and applications
- Logs: Centralized log collection and analysis
- Events: Stream of system events describing changes in AWS resources (now part of Amazon EventBridge)
- Alarms: Notifications based on metrics crossing thresholds
- Dashboards: Customizable visualization of metrics and alarms
- Advanced Features:
- Synthetics: Canary scripts that monitor endpoints and APIs
- ServiceLens: Application and transaction tracing
- Container Insights: Monitoring for containerized applications
- Lambda Insights: Monitoring for serverless applications
- Contributor Insights: Analyze high-cardinality data
- Integration: Works with AWS services, on-premises resources, and custom applications
- Metrics Resolution: Standard (1-minute) and high-resolution (1-second) metrics
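Alarm evaluation reduces to a simple rule over per-period statistics. The sketch below mirrors the `put-metric-alarm` settings used in this section (Average statistic, threshold 80, 2 evaluation periods); it is an illustration of the semantics, not the CloudWatch implementation:

```python
# Transition to ALARM only after `evaluation_periods` consecutive breaches.
def alarm_state(period_averages, threshold=80.0, evaluation_periods=2):
    """period_averages: the Average statistic per period, oldest first."""
    if len(period_averages) < evaluation_periods:
        return "INSUFFICIENT_DATA"
    recent = period_averages[-evaluation_periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

print(alarm_state([40.0, 85.0]))         # OK -- only one breaching period so far
print(alarm_state([40.0, 85.0, 91.5]))   # ALARM -- two consecutive breaches
```

Requiring multiple evaluation periods is what keeps a single transient CPU spike from paging anyone.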
CloudWatch Examples:
# Create a metric alarm for CPU utilization
aws cloudwatch put-metric-alarm \
--alarm-name cpu-utilization \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--period 300 \
--statistic Average \
--threshold 80 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic \
--dimensions "Name=InstanceId,Value=i-1234567890abcdef0"
# Create a dashboard
aws cloudwatch put-dashboard \
--dashboard-name MyDashboard \
--dashboard-body '{"widgets":[{"type":"metric","x":0,"y":0,"width":12,"height":6,"properties":{"metrics":[["AWS/EC2","CPUUtilization","InstanceId","i-1234567890abcdef0"]],"view":"timeSeries","stacked":false,"title":"EC2 Instance CPU"}}]}'
# Create a log group
aws logs create-log-group --log-group-name /my-application/logs
AWS CloudFormation
Business Value
CloudFormation enables organizations to treat infrastructure as code, bringing software development practices to infrastructure management:
- Consistency and Standardization: Create reliable, repeatable infrastructures across environments
- Automation: Eliminate manual processes and human error
- Version Control: Track infrastructure changes using standard source control tools
- Rapid Provisioning: Deploy entire environments quickly for development, testing, and production
- Comprehensive Management: Manage the complete lifecycle of AWS resources
Business Impact:
- Faster Time to Market: Rapidly deploy new applications and features
- Reduced Operational Risk: Eliminate configuration drift and manual errors
- Improved Compliance: Document and enforce security and compliance requirements as code
- Resource Optimization: Easily create and tear down environments to control costs
- Enhanced Collaboration: Enable DevOps practices between development and operations teams
Implementation Strategy:
- Start Small: Begin with simple resources, then build more complex patterns
- Modular Design: Create reusable components with nested stacks
- Environment Segregation: Use parameters and conditions for different environments
- Pipeline Integration: Include CloudFormation in CI/CD pipelines
- Governance: Implement organizational standards via stack policies and AWS Config
Technical Implementation
AWS CloudFormation provides a common language to describe and provision all the infrastructure resources in your cloud environment using infrastructure as code (IaC).
- Template Structure:
- Resources: AWS resources you want to create
- Parameters: Custom values to pass to the template
- Mappings: Conditional values based on region, environment, etc.
- Conditions: Control whether resources are created
- Outputs: Values that can be imported into other stacks
- Template Formats: JSON and YAML
- Stack Management:
- Create, update, and delete stacks
- Change sets to preview changes
- Stack policies to prevent accidental updates
- Drift detection to identify manual changes
- Advanced Features:
- Nested stacks for reusable components
- Stack sets for multi-account and multi-region deployments
- Custom resources for non-AWS resources
- Macros for template transformations
CloudFormation Example (YAML):
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Basic web application infrastructure'
Parameters:
EnvironmentType:
Type: String
Default: dev
AllowedValues: [dev, test, prod]
Resources:
  VPC:
    Type: 'AWS::EC2::VPC'
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-vpc'
  PublicSubnet:
    Type: 'AWS::EC2::Subnet'      # an instance launched into this VPC needs a subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
  WebServerSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Allow HTTP and SSH access
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0   # demo only; restrict SSH to a trusted CIDR in production
  WebServerInstance:
    Type: 'AWS::EC2::Instance'
    Properties:
      InstanceType: !If [IsProd, t3.medium, t3.micro]
      SubnetId: !Ref PublicSubnet
      SecurityGroupIds:
        - !Ref WebServerSecurityGroup
      ImageId: ami-0c55b159cbfafe1f0   # region-specific; replace with a current AMI for your region
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-webserver'
Conditions:
  IsProd: !Equals [!Ref EnvironmentType, prod]
Outputs:
  WebServerPublicIP:
    Description: Public IP of the web server
    Value: !GetAtt WebServerInstance.PublicIp
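The Parameters/Conditions mechanics in the template can be made concrete with a toy resolver. This is not the CloudFormation engine, just the decision it applies when choosing the instance type:

```python
# Mirrors the template: EnvironmentType parameter with AllowedValues, an IsProd
# condition (!Equals), and an InstanceType chosen via !If.
ALLOWED_VALUES = {"dev", "test", "prod"}   # Parameter AllowedValues

def resolve_instance_type(environment_type: str) -> str:
    if environment_type not in ALLOWED_VALUES:
        raise ValueError(f"EnvironmentType must be one of {sorted(ALLOWED_VALUES)}")
    is_prod = environment_type == "prod"            # IsProd: !Equals [!Ref EnvironmentType, prod]
    return "t3.medium" if is_prod else "t3.micro"   # !If [IsProd, t3.medium, t3.micro]

print(resolve_instance_type("dev"))    # t3.micro
print(resolve_instance_type("prod"))   # t3.medium
```

This is the pattern behind environment segregation: one template, with parameters and conditions selecting the sizing and settings per environment.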
Application Integration
- SQS
- EventBridge
Amazon Simple Queue Service (SQS)
Business Value
SQS provides a reliable, scalable message queue that helps businesses build resilient, loosely-coupled systems:
- System Decoupling: Buffer between components allows independent scaling and failure isolation
- Workload Smoothing: Handle traffic spikes without service degradation
- Fault Tolerance: Messages persist even if processing components fail
- Simplified Architecture: Eliminate complex coordination between distributed components
- Cost Optimization: Process messages at a rate that matches your budget and needs
Business Use Cases:
- Order Processing: Reliably manage order flows in e-commerce systems
- Media Processing: Handle video/image processing at scale
- Log Processing: Collect and process logs from distributed systems
- Task Distribution: Distribute work across multiple processors
- Email Sending: Queue email requests for reliable delivery
- Batch Processing: Collect transactions for efficient batch processing
Implementation Strategy:
- Choose Queue Type: Standard for high throughput, FIFO when order matters
- Message Lifecycle: Configure appropriate visibility timeout and retention
- Error Handling: Implement dead-letter queues for unprocessable messages
- Auto-Scaling: Scale consumers based on queue depth
- Monitoring: Track queue metrics with CloudWatch
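The visibility-timeout and dead-letter behavior described above can be simulated in memory. The sketch below models the message lifecycle only (it is not the SQS API, and `max_receives` stands in for a redrive policy's maxReceiveCount):

```python
# A received message becomes invisible for `visibility_timeout` seconds; if it is
# received more than `max_receives` times without being deleted, it is moved to
# the dead-letter queue.
import time

class ToyQueue:
    def __init__(self, visibility_timeout=30, max_receives=3):
        self.visibility_timeout = visibility_timeout
        self.max_receives = max_receives
        self.messages = []        # each entry: [body, receive_count, invisible_until]
        self.dead_letter = []

    def send(self, body):
        self.messages.append([body, 0, 0.0])

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg in list(self.messages):
            if msg[2] <= now:                      # currently visible?
                msg[1] += 1
                if msg[1] > self.max_receives:     # exceeded redrive policy
                    self.messages.remove(msg)
                    self.dead_letter.append(msg[0])
                    continue
                msg[2] = now + self.visibility_timeout
                return msg[0]
        return None

    def delete(self, body):                        # consumer acks after processing
        self.messages = [m for m in self.messages if m[0] != body]

q = ToyQueue(visibility_timeout=30, max_receives=1)
q.send("order-12345")
assert q.receive(now=0.0) == "order-12345"   # first receive: message goes invisible
assert q.receive(now=10.0) is None           # still within the visibility timeout
q.receive(now=40.0)                          # timed out unprocessed -> dead-letter
assert q.dead_letter == ["order-12345"]
```

A consumer that finishes processing calls `delete` before the timeout expires; one that crashes simply lets the message reappear, which is why SQS standard queues are at-least-once.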
Technical Implementation
Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
- Queue Types:
- Standard Queues: High throughput, at-least-once delivery, best-effort ordering
- FIFO Queues: First-in-first-out delivery, exactly-once processing, message groups
- Message Handling:
- Message Retention: Configurable from 1 minute to 14 days (default 4 days)
- Message Size: Up to 256KB
- Visibility Timeout: Period during which messages are invisible after being received
- Dead-Letter Queues: Capture messages that can't be processed
- Security: Server-side encryption, IAM policies, VPC endpoints, temporary credentials
- Integration: Works with Lambda, SNS, EventBridge, and most AWS services
SQS Examples:
# Create a standard queue
aws sqs create-queue --queue-name my-standard-queue
# Create a FIFO queue
aws sqs create-queue \
--queue-name my-fifo-queue.fifo \
--attributes FifoQueue=true,ContentBasedDeduplication=true
# Send a message
aws sqs send-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-standard-queue \
--message-body "Hello, SQS!"
# Receive messages
aws sqs receive-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-standard-queue \
--max-number-of-messages 10 \
--wait-time-seconds 20
# Delete a message after processing
aws sqs delete-message \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-standard-queue \
--receipt-handle AQEBTpyL...
Amazon EventBridge
Business Value
EventBridge enables organizations to build event-driven architectures that respond quickly to business events and system changes:
- Application Integration: Connect disparate systems without tight coupling
- Workflow Automation: Trigger processes automatically based on events
- Real-time Response: React immediately to business events
- Simplified Architecture: Reduce custom integration code
- Operational Visibility: Track events across the entire business
Business Use Cases:
- SaaS Integration: Connect with third-party SaaS providers
- Microservices Orchestration: Coordinate processes across microservices
- IoT Data Processing: Process and route IoT device data
- Business Process Automation: Trigger workflows based on system events
- Operational Monitoring: Respond to infrastructure changes and alerts
- Data Lake Updates: Trigger ETL processes when new data arrives
Business Benefits:
- Reduced Integration Costs: Built-in connectors eliminate custom integration work
- Increased Business Agility: Quickly respond to changing business conditions
- Enhanced Customer Experience: Real-time responses to customer interactions
- Operational Efficiency: Automate routine processes triggered by events
- Improved Visibility: Track events across the business in a single system
Technical Implementation
Amazon EventBridge is a serverless event bus service that connects applications using events from your own applications, integrated SaaS applications, and AWS services.
- Core Components:
- Event Buses: Pipelines that receive events from sources and route them, via rules, to targets
- Rules: Patterns that determine which events are processed
- Targets: Destinations for events that match rules
- Event Patterns: JSON patterns that filter events
- Schemas: Structures that define event format
- Bus Types:
- Default Bus: Receives events from AWS services
- Custom Bus: For your own applications and events
- Partner Bus: For SaaS partner integrations
- Features:
- Content-based filtering
- Event archive and replay
- Schema discovery and registry
- Dead-letter queues
- Cross-account and cross-region event delivery
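The content-based filtering that rules perform can be sketched as a recursive matcher. This covers exact-match patterns only; real EventBridge patterns also support prefix, numeric, exists, and anything-but operators:

```python
# A pattern matches when, for every field it names, the event's value appears in
# the pattern's list of candidates; nested objects (like "detail") recurse.
def matches(pattern: dict, event: dict) -> bool:
    for key, candidates in pattern.items():
        if key not in event:
            return False
        value = event[key]
        if isinstance(candidates, dict):               # nested field, e.g. "detail"
            if not (isinstance(value, dict) and matches(candidates, value)):
                return False
        elif value not in candidates:                  # candidates is a list of literals
            return False
    return True

# The rule pattern from the CLI example below
rule = {"source": ["com.mycompany.orders"],
        "detail-type": ["Order Placed"],
        "detail": {"status": ["new"]}}

event = {"source": "com.mycompany.orders",
         "detail-type": "Order Placed",
         "detail": {"status": "new", "orderId": "12345"}}

print(matches(rule, event))                                        # True
print(matches(rule, {**event, "detail": {"status": "shipped"}}))   # False
```

Note the asymmetry: the event may carry extra fields (like `orderId`) and still match, but every field the pattern names must be present and match.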
EventBridge Examples:
# Create a custom event bus
aws events create-event-bus --name my-application-bus
# Create a rule to route events to Lambda
aws events put-rule \
--name "OrderProcessingRule" \
--event-bus-name my-application-bus \
--event-pattern '{
"source": ["com.mycompany.orders"],
"detail-type": ["Order Placed"],
"detail": {
"status": ["new"]
}
}'
# Add a Lambda function as a target
aws events put-targets \
--rule OrderProcessingRule \
--event-bus-name my-application-bus \
--targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder"
# Send a custom event
aws events put-events \
--entries '[{
"Source": "com.mycompany.orders",
"DetailType": "Order Placed",
"Detail": "{\"orderId\": \"12345\", \"status\": \"new\", \"amount\": 123.45}",
"EventBusName": "my-application-bus"
}]'